* [dpdk-dev] [PATCH 00/17] unified packet type
[not found] <1421637666-16872-1-git-send-email-helin.zhang@intel.com>
@ 2015-01-29 3:15 ` Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet types Helin Zhang
` (19 more replies)
0 siblings, 20 replies; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:15 UTC (permalink / raw)
To: dev
Currently only 6 bits stored in ol_flags are used to indicate the packet
types. This is not enough, as some NIC hardware can recognize quite a lot
of packet types, e.g. i40e hardware can recognize more than 150 packet
types. Hiding those packet types hides hardware offload capabilities which
could be quite useful for improving performance and for end users. So a
unified packet type is needed to support all possible PMDs. Recently a
16-bit packet_type field has been added to the mbuf header and can be used
for this purpose. In addition, all packet type bits stored in ol_flags
should be removed, freeing up 6 bits of ol_flags as a side benefit.
Initially, the 16 bits of packet_type are divided into several sub-fields
carrying different kinds of packet type information. The initial design
splits those bits into 4 fields: L3 types, tunnel types, inner L3 types
and L4 types. All PMDs should translate the packet types reported by
hardware into these 4 fields for user applications, as illustrated by the
decoding sketch below.
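Below is a minimal decoding sketch, not part of this patch set, showing how
an application could split a packet_type value into its four sub-fields; it
only assumes the RTE_PTYPE_* masks and helper macros introduced in patch 01:

#include <stdint.h>
#include <stdio.h>
#include <rte_mbuf.h>

/* Sketch only: decode a unified packet_type into its four sub-fields. */
static inline void
decode_packet_type(uint16_t ptype)
{
        uint16_t tunnel   = ptype & RTE_PTYPE_TUNNEL_MASK;   /* bits 3:0   */
        uint16_t l3       = ptype & RTE_PTYPE_L3_MASK;       /* bits 7:4   */
        uint16_t l4       = ptype & RTE_PTYPE_L4_MASK;       /* bits 10:8  */
        uint16_t inner_l3 = ptype & RTE_PTYPE_INNER_L3_MASK; /* bits 13:11 */

        if (RTE_ETH_IS_IPV4_HDR(ptype))
                printf("outer L3 is IPv4 (value 0x%x)\n", l3);
        else if (RTE_ETH_IS_IPV6_HDR(ptype))
                printf("outer L3 is IPv6 (value 0x%x)\n", l3);
        if (tunnel != 0)
                printf("tunnel type 0x%x, inner L3 0x%x\n", tunnel, inner_l3);
        if (l4 == RTE_PTYPE_L4_UDP)
                printf("L4 is UDP\n");
}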
Helin Zhang (17):
mbuf: add definitions of unified packet types
e1000: support of unified packet type
ixgbe: support of unified packet type
ixgbe: support of unified packet type
i40e: support of unified packet type
bond: support of unified packet type
enic: support of unified packet type
vmxnet3: support of unified packet type
app/test-pipeline: support of unified packet type
app/test-pmd: support of unified packet type
app/test: support of unified packet type
examples/ip_fragmentation: support of unified packet type
examples/ip_reassembly: support of unified packet type
examples/l3fwd-acl: support of unified packet type
examples/l3fwd-power: support of unified packet type
examples/l3fwd: support of unified packet type
mbuf: remove old packet type bit masks for ol_flags
app/test-pipeline/pipeline_hash.c | 4 +-
app/test-pmd/csumonly.c | 6 +-
app/test-pmd/rxonly.c | 9 +-
app/test/packet_burst_generator.c | 10 +-
examples/ip_fragmentation/main.c | 7 +-
examples/ip_reassembly/main.c | 7 +-
examples/l3fwd-acl/main.c | 19 +-
examples/l3fwd-power/main.c | 5 +-
examples/l3fwd/main.c | 64 +--
lib/librte_mbuf/rte_mbuf.c | 6 -
lib/librte_mbuf/rte_mbuf.h | 84 +++-
lib/librte_pmd_bond/rte_eth_bond_pmd.c | 9 +-
lib/librte_pmd_e1000/igb_rxtx.c | 95 +++-
lib/librte_pmd_enic/enic_main.c | 14 +-
lib/librte_pmd_i40e/i40e_rxtx.c | 778 +++++++++++++++++++++------------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 141 ++++--
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 39 +-
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 +-
18 files changed, 865 insertions(+), 436 deletions(-)
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet types
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
@ 2015-01-29 3:15 ` Helin Zhang
2015-01-30 13:56 ` Olivier MATZ
2015-01-29 3:15 ` [dpdk-dev] [PATCH 02/17] e1000: support of unified packet type Helin Zhang
` (18 subsequent siblings)
19 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:15 UTC (permalink / raw)
To: dev
There are only 6 bit flags in ol_flags for indicating packet types, which
is not enough to describe all the possible packet types hardware can
recognize. For example, i40e hardware can recognize more than 150 packet
types. The unified packet type is composed of tunnel type, L3 type, L4 type
and inner L3 type fields, and is stored in the 16-bit mbuf field
'packet_type'.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 74 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 74 insertions(+)
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 16059c6..94ae344 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -165,6 +165,80 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
+/*
+ * Sixteen bits are divided into several fields to mark packet types. Note that
+ * each field holds an enumerated (index) value rather than a set of bit flags.
+ * - Bit 3:0 is for tunnel types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 10:8 is for L4 types. It can also be used for inner L4 types for
+ * tunneling packets.
+ * - Bit 13:11 is for inner L3 types.
+ * - Bit 15:14 is reserved.
+ *
+ * To be compatible with the Vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept within the contiguous 7 bits shown below.
+ *
+ * Note that the L3 type values are chosen so that checking for an IPv4/IPv6
+ * header stays cheap. Read the annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR before changing any of the L3 type values.
+ */
+#define RTE_PTYPE_UNKNOWN 0x0000 /* 0b0000000000000000 */
+/* bit 3:0 for tunnel types */
+#define RTE_PTYPE_TUNNEL_IP 0x0001 /* 0b0000000000000001 */
+#define RTE_PTYPE_TUNNEL_TCP 0x0002 /* 0b0000000000000010 */
+#define RTE_PTYPE_TUNNEL_UDP 0x0003 /* 0b0000000000000011 */
+#define RTE_PTYPE_TUNNEL_GRE 0x0004 /* 0b0000000000000100 */
+#define RTE_PTYPE_TUNNEL_VXLAN 0x0005 /* 0b0000000000000101 */
+#define RTE_PTYPE_TUNNEL_NVGRE 0x0006 /* 0b0000000000000110 */
+#define RTE_PTYPE_TUNNEL_GENEVE 0x0007 /* 0b0000000000000111 */
+#define RTE_PTYPE_TUNNEL_GRENAT 0x0008 /* 0b0000000000001000 */
+#define RTE_PTYPE_TUNNEL_GRENAT_MAC 0x0009 /* 0b0000000000001001 */
+#define RTE_PTYPE_TUNNEL_GRENAT_MACVLAN 0x000a /* 0b0000000000001010 */
+#define RTE_PTYPE_TUNNEL_MASK 0x000f /* 0b0000000000001111 */
+/* bit 7:4 for L3 types */
+#define RTE_PTYPE_L3_IPV4 0x0010 /* 0b0000000000010000 */
+#define RTE_PTYPE_L3_IPV4_EXT 0x0030 /* 0b0000000000110000 */
+#define RTE_PTYPE_L3_IPV6 0x0040 /* 0b0000000001000000 */
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x0090 /* 0b0000000010010000 */
+#define RTE_PTYPE_L3_IPV6_EXT 0x00c0 /* 0b0000000011000000 */
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x00e0 /* 0b0000000011100000 */
+#define RTE_PTYPE_L3_MASK 0x00f0 /* 0b0000000011110000 */
+/* bit 10:8 for L4 types */
+#define RTE_PTYPE_L4_TCP 0x0100 /* 0b0000000100000000 */
+#define RTE_PTYPE_L4_UDP 0x0200 /* 0b0000001000000000 */
+#define RTE_PTYPE_L4_FRAG 0x0300 /* 0b0000001100000000 */
+#define RTE_PTYPE_L4_SCTP 0x0400 /* 0b0000010000000000 */
+#define RTE_PTYPE_L4_ICMP 0x0500 /* 0b0000010100000000 */
+#define RTE_PTYPE_L4_NONFRAG 0x0600 /* 0b0000011000000000 */
+#define RTE_PTYPE_L4_MASK 0x0700 /* 0b0000011100000000 */
+/* bit 13:11 for inner L3 types */
+#define RTE_PTYPE_INNER_L3_IPV4 0x0800 /* 0b0000100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x1000 /* 0b0001000000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6 0x1800 /* 0b0001100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x2000 /* 0b0010000000000000 */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x2800 /* 0b0010100000000000 */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x3000 /* 0b0011000000000000 */
+#define RTE_PTYPE_INNER_L3_MASK 0x3800 /* 0b0011100000000000 */
+/* bit 15:14 reserved */
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
+ * one, bit 4 is selected to be used for IPv4 only, so checking bit 4 is enough
+ * to determine whether it is an IPv4 packet.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
+ * one, bit 6 is selected to be used for IPv6 only, so checking bit 6 is enough
+ * to determine whether it is an IPv6 packet.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneling packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+
/**
* Get the name of a RX offload flag
*
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
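A minimal usage sketch, not part of the series, of how an application might
consume the new macros after an RX burst; the burst size, port/queue ids and
counters below are assumptions for illustration only:

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint64_t ipv4_cnt, ipv6_cnt, tunnel_cnt; /* illustration only */

/* Sketch: classify received packets using mbuf->packet_type instead of
 * the old PKT_RX_IPV4_HDR/PKT_RX_IPV6_HDR ol_flags bits. */
static void
count_rx_types(uint8_t port_id, uint16_t queue_id)
{
        struct rte_mbuf *pkts[32];
        uint16_t i, nb = rte_eth_rx_burst(port_id, queue_id, pkts, 32);

        for (i = 0; i < nb; i++) {
                uint16_t ptype = pkts[i]->packet_type;

                if (RTE_ETH_IS_TUNNEL_PKT(ptype))
                        tunnel_cnt++;
                else if (RTE_ETH_IS_IPV4_HDR(ptype))
                        ipv4_cnt++;
                else if (RTE_ETH_IS_IPV6_HDR(ptype))
                        ipv6_cnt++;
                rte_pktmbuf_free(pkts[i]);
        }
}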
* [dpdk-dev] [PATCH 02/17] e1000: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-01-29 3:15 ` Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 03/17] ixgbe: " Helin Zhang
` (17 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:15 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_e1000/igb_rxtx.c | 95 ++++++++++++++++++++++++++++++++++-------
1 file changed, 80 insertions(+), 15 deletions(-)
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 5c394a9..1ffb39e 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -602,17 +602,82 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
+#define IGB_PACKET_TYPE_IPV4 0X01
+#define IGB_PACKET_TYPE_IPV4_TCP 0X11
+#define IGB_PACKET_TYPE_IPV4_UDP 0X21
+#define IGB_PACKET_TYPE_IPV4_SCTP 0X41
+#define IGB_PACKET_TYPE_IPV4_EXT 0X03
+#define IGB_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IGB_PACKET_TYPE_IPV6 0X04
+#define IGB_PACKET_TYPE_IPV6_TCP 0X14
+#define IGB_PACKET_TYPE_IPV6_UDP 0X24
+#define IGB_PACKET_TYPE_IPV6_EXT 0X0C
+#define IGB_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IGB_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IGB_PACKET_TYPE_IPV4_IPV6 0X05
+#define IGB_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IGB_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IGB_PACKET_TYPE_MAX 0X80
+#define IGB_PACKET_TYPE_MASK 0X7F
+#define IGB_PACKET_TYPE_SHIFT 0X04
+static inline uint16_t
+igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint16_t
+ ptype_table[IGB_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IGB_PACKET_TYPE_IPV4] = RTE_PTYPE_L3_IPV4,
+ [IGB_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L3_IPV4_EXT,
+ [IGB_PACKET_TYPE_IPV6] = RTE_PTYPE_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP | RTE_PTYPE_INNER_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L3_IPV6 |
+ RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP | RTE_PTYPE_INNER_L3_IPV6 |
+ RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L3_IPV6_EXT |
+ RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L3_IPV6 |
+ RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP | RTE_PTYPE_INNER_L3_IPV6 |
+ RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L3_IPV6_EXT |
+ RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_L4_SCTP,
+ [IGB_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L3_IPV4_EXT |
+ RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & E1000_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IGB_PACKET_TYPE_SHIFT) & IGB_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- };
+ uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH;
#if defined(RTE_LIBRTE_IEEE1588)
static uint32_t ip_pkt_etqf_map[8] = {
@@ -620,14 +685,10 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
0, 0, 0, 0,
};
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-#else
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ pkt_flags |= ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07];
#endif
- return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
+
+ return pkt_flags;
}
static inline uint64_t
@@ -802,6 +863,8 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
+ lo_dword.hs_rss.pkt_info);
/*
* Store the mbuf address into the next entry of the array
@@ -1036,6 +1099,8 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
+ lower.lo_dword.hs_rss.pkt_info);
/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
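The pattern above (and in the ixgbe patches that follow) boils down to a
descriptor-index-to-ptype lookup table. A simplified, self-contained sketch
using a purely hypothetical 2-bit descriptor encoding, not a real NIC layout:

#include <stdint.h>
#include <rte_mbuf.h>

/* Sketch with an invented 2-bit descriptor ptype field: map it to unified
 * RTE_PTYPE_* values via a lookup table, the same technique
 * igb_rxd_pkt_info_to_pkt_type() uses for the real descriptor encoding. */
static inline uint16_t
desc_ptype_to_rte_ptype(uint8_t desc_ptype)
{
        static const uint16_t table[4] = {
                [0] = RTE_PTYPE_UNKNOWN,
                [1] = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
                [2] = RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
                [3] = RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
        };

        return table[desc_ptype & 0x3];
}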
* [dpdk-dev] [PATCH 03/17] ixgbe: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet types Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 02/17] e1000: support of unified packet type Helin Zhang
@ 2015-01-29 3:15 ` Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 04/17] " Helin Zhang
` (16 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:15 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 141 +++++++++++++++++++++++++++++---------
1 file changed, 107 insertions(+), 34 deletions(-)
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index e6766b3..aefb4e9 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -866,40 +866,102 @@ end_of_tx:
* RX functions
*
**********************************************************************/
-static inline uint64_t
-rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
+#define IXGBE_PACKET_TYPE_IPV4 0X01
+#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
+#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
+#define IXGBE_PACKET_TYPE_IPV4_SCTP 0X41
+#define IXGBE_PACKET_TYPE_IPV4_EXT 0X03
+#define IXGBE_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IXGBE_PACKET_TYPE_IPV6 0X04
+#define IXGBE_PACKET_TYPE_IPV6_TCP 0X14
+#define IXGBE_PACKET_TYPE_IPV6_UDP 0X24
+#define IXGBE_PACKET_TYPE_IPV6_EXT 0X0C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IXGBE_PACKET_TYPE_IPV4_IPV6 0X05
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IXGBE_PACKET_TYPE_MAX 0X80
+#define IXGBE_PACKET_TYPE_MASK 0X7F
+#define IXGBE_PACKET_TYPE_SHIFT 0X04
+static inline uint16_t
+ixgbe_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
+ static const uint16_t
+ ptype_table[IXGBE_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IXGBE_PACKET_TYPE_IPV4] = RTE_PTYPE_L3_IPV4,
+ [IXGBE_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L3_IPV4_EXT,
+ [IXGBE_PACKET_TYPE_IPV6] = RTE_PTYPE_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP | RTE_PTYPE_INNER_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L3_IPV6 |
+ RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP | RTE_PTYPE_INNER_L3_IPV6 |
+ RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L3_IPV6_EXT |
+ RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L3_IPV6 |
+ RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP | RTE_PTYPE_INNER_L3_IPV6 |
+ RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L3_IPV6_EXT |
+ RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_L4_SCTP,
+ [IXGBE_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L3_IPV4_EXT |
+ RTE_PTYPE_L4_SCTP,
};
+ if (unlikely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
- static uint64_t ip_rss_types_map[16] = {
+ pkt_info = (pkt_info >> IXGBE_PACKET_TYPE_SHIFT) &
+ IXGBE_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
+{
+ static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
PKT_RX_RSS_HASH, 0, 0, 0,
0, 0, 0, PKT_RX_FDIR,
};
-
#ifdef RTE_LIBRTE_IEEE1588
static uint64_t ip_pkt_etqf_map[8] = {
0, 0, 0, PKT_RX_IEEE1588_PTP,
0, 0, 0, 0,
};
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ if (likely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return ip_pkt_etqf_map[(pkt_info >> 4) & 0X07] |
+ ip_rss_types_map[pkt_info & 0xF];
+ else
+ return ip_rss_types_map[pkt_info & 0xF];
#else
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-
+ return ip_rss_types_map[pkt_info & 0xF];
#endif
- return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
}
static inline uint64_t
@@ -956,7 +1018,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
- int s[LOOK_AHEAD], nb_dd;
+ int nb_dd;
+ uint32_t s[LOOK_AHEAD];
+ uint16_t pkt_info[LOOK_AHEAD];
int i, j, nb_rx = 0;
@@ -979,6 +1043,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rxdp[j].wb.upper.status_error;
+ for (j = LOOK_AHEAD-1; j >= 0; --j)
+ pkt_info[j] = rxdp[j].wb.lower.lo_dword.hs_rss.pkt_info;
+
/* Compute how many status bits were set */
nb_dd = 0;
for (j = 0; j < LOOK_AHEAD; ++j)
@@ -996,12 +1063,13 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
- rxdp[j].wb.lower.lo_dword.data);
- /* reuse status field from scan list */
- pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
+ pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags |=
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
mb->ol_flags = pkt_flags;
+ mb->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);
if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
@@ -1198,7 +1266,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint32_t pkt_info;
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1316,14 +1384,17 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;
- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
@@ -1382,7 +1453,7 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma; /* Physical address of mbuf data buffer */
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint16_t pkt_info;
uint16_t rx_id;
uint16_t nb_rx;
uint16_t nb_hold;
@@ -1561,13 +1632,15 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* set in the pkt_flags field.
*/
first_seg->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = (pkt_flags |
- rx_desc_status_to_pkt_flags(staterr));
- pkt_flags = (pkt_flags |
- rx_desc_error_to_pkt_flags(staterr));
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
if (likely(pkt_flags & PKT_RX_RSS_HASH))
first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH 04/17] ixgbe: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (2 preceding siblings ...)
2015-01-29 3:15 ` [dpdk-dev] [PATCH 03/17] ixgbe: " Helin Zhang
@ 2015-01-29 3:15 ` Helin Zhang
2015-01-29 23:30 ` Bruce Richardson
2015-01-29 3:15 ` [dpdk-dev] [PATCH 05/17] i40e: " Helin Zhang
` (15 subsequent siblings)
19 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:15 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type in the Vector PMD.
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 39 +++++++++++++++++++----------------
1 file changed, 21 insertions(+), 18 deletions(-)
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
index b54cb19..b3cf7dd 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
@@ -134,44 +134,35 @@ ixgbe_rxq_rearm(struct igb_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE
-#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
- PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
- PKT_RX_IPV6_HDR_EXT))
-#define OLFLAGS_MASK_V (((uint64_t)OLFLAGS_MASK << 48) | \
- ((uint64_t)OLFLAGS_MASK << 32) | \
- ((uint64_t)OLFLAGS_MASK << 16) | \
- ((uint64_t)OLFLAGS_MASK))
-#define PTYPE_SHIFT (1)
+#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
+ ((uint64_t)PKT_RX_VLAN_PKT))
#define VTAG_SHIFT (3)
static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
- __m128i ptype0, ptype1, vtag0, vtag1;
+ __m128i vtag0, vtag1;
union {
uint16_t e[4];
uint64_t dword;
} vol;
- ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
- ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);
- ptype1 = _mm_unpacklo_epi32(ptype0, ptype1);
vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
-
- ptype1 = _mm_slli_epi16(ptype1, PTYPE_SHIFT);
vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);
- ptype1 = _mm_or_si128(ptype1, vtag1);
- vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
+ vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;
rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
rx_pkts[2]->ol_flags = vol.e[2];
rx_pkts[3]->ol_flags = vol.e[3];
}
+
#else
#define desc_to_olflags_v(desc, rx_pkts) do {} while (0)
#endif
@@ -204,6 +195,8 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
0 /* ignore pkt_type field */
);
__m128i dd_check, eop_check;
+ __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
+ 0xFFFFFFFF, 0xFFFF07F0);
if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -239,7 +232,8 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
13, 12, /* octet 12~13, low 16 bits pkt_len */
13, 12, /* octet 12~13, 16 bits data_len */
- 0xFF, 0xFF /* skip pkt_type field */
+ 1, /* octet 1, 8 bits pkt_type field */
+ 0 /* octet 0, 4 bits offset 4 pkt_type field */
);
/* Cache is empty -> need to scan the buffer rings, but first move
@@ -248,6 +242,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/*
* A. load 4 packet in one loop
+ * [A*. mask out the unused dirty fields in the 4 descriptors]
* B. copy 4 mbuf point from swring to rx_pkts
* C. calc the number of DD bits among the 4 packets
* [C*. extract the end-of-packet bit, if requested]
@@ -289,6 +284,14 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.2 copy 2 mbuf point into rx_pkts */
_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);
+ /* A* mask out 0~3 bits RSS type */
+ descs[3] = _mm_and_si128(descs[3], desc_mask);
+ descs[2] = _mm_and_si128(descs[2], desc_mask);
+
+ /* A* mask out 0~3 bits RSS type */
+ descs[1] = _mm_and_si128(descs[1], desc_mask);
+ descs[0] = _mm_and_si128(descs[0], desc_mask);
+
/* avoid compiler reorder optimization */
rte_compiler_barrier();
@@ -301,7 +304,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
- /* set ol_flags with packet type and vlan tag */
+ /* set ol_flags with vlan packet type */
desc_to_olflags_v(descs, &rx_pkts[pos]);
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH 05/17] i40e: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (3 preceding siblings ...)
2015-01-29 3:15 ` [dpdk-dev] [PATCH 04/17] " Helin Zhang
@ 2015-01-29 3:15 ` Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 06/17] bond: " Helin Zhang
` (14 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:15 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
lib/librte_pmd_i40e/i40e_rxtx.c | 778 ++++++++++++++++++++++++++--------------
1 file changed, 504 insertions(+), 274 deletions(-)
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 2beae3c..68029c3 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -146,272 +146,503 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
return flags;
}
-/* Translate pkt types to pkt flags */
-static inline uint64_t
-i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
+/* For the meaning of each value, refer to the hardware datasheet */
+static inline uint16_t
+i40e_rxd_pkt_type_mapping(uint8_t ptype)
{
- uint8_t ptype = (uint8_t)((qword & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
- static const uint64_t ip_ptype_map[I40E_MAX_PKT_TYPE] = {
- 0, /* PTYPE 0 */
- 0, /* PTYPE 1 */
- 0, /* PTYPE 2 */
- 0, /* PTYPE 3 */
- 0, /* PTYPE 4 */
- 0, /* PTYPE 5 */
- 0, /* PTYPE 6 */
- 0, /* PTYPE 7 */
- 0, /* PTYPE 8 */
- 0, /* PTYPE 9 */
- 0, /* PTYPE 10 */
- 0, /* PTYPE 11 */
- 0, /* PTYPE 12 */
- 0, /* PTYPE 13 */
- 0, /* PTYPE 14 */
- 0, /* PTYPE 15 */
- 0, /* PTYPE 16 */
- 0, /* PTYPE 17 */
- 0, /* PTYPE 18 */
- 0, /* PTYPE 19 */
- 0, /* PTYPE 20 */
- 0, /* PTYPE 21 */
- PKT_RX_IPV4_HDR, /* PTYPE 22 */
- PKT_RX_IPV4_HDR, /* PTYPE 23 */
- PKT_RX_IPV4_HDR, /* PTYPE 24 */
- 0, /* PTYPE 25 */
- PKT_RX_IPV4_HDR, /* PTYPE 26 */
- PKT_RX_IPV4_HDR, /* PTYPE 27 */
- PKT_RX_IPV4_HDR, /* PTYPE 28 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 29 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 30 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 31 */
- 0, /* PTYPE 32 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 33 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 34 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 35 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 36 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 37 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 38 */
- 0, /* PTYPE 39 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 40 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 41 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 42 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 43 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 44 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 45 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 46 */
- 0, /* PTYPE 47 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 48 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 49 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 50 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 51 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 52 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 53 */
- 0, /* PTYPE 54 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 55 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 56 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 57 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 58 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 59 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 60 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 61 */
- 0, /* PTYPE 62 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 63 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 64 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 65 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 66 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 67 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 68 */
- 0, /* PTYPE 69 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 70 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 71 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 72 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 73 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 74 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 75 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 76 */
- 0, /* PTYPE 77 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 78 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 79 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 80 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 81 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 82 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 83 */
- 0, /* PTYPE 84 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 85 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 86 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 87 */
- PKT_RX_IPV6_HDR, /* PTYPE 88 */
- PKT_RX_IPV6_HDR, /* PTYPE 89 */
- PKT_RX_IPV6_HDR, /* PTYPE 90 */
- 0, /* PTYPE 91 */
- PKT_RX_IPV6_HDR, /* PTYPE 92 */
- PKT_RX_IPV6_HDR, /* PTYPE 93 */
- PKT_RX_IPV6_HDR, /* PTYPE 94 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 95 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 96 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 97 */
- 0, /* PTYPE 98 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 99 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 100 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 101 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 102 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 103 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 104 */
- 0, /* PTYPE 105 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 106 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 107 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 108 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 109 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 110 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 111 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 112 */
- 0, /* PTYPE 113 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 114 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 115 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 116 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 117 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 118 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 119 */
- 0, /* PTYPE 120 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 121 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 122 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 123 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 124 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 125 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 126 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 127 */
- 0, /* PTYPE 128 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 129 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 130 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 131 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 132 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 133 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 134 */
- 0, /* PTYPE 135 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 136 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 137 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 138 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 139 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 140 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 141 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 142 */
- 0, /* PTYPE 143 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 144 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 145 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 146 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 147 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 148 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 149 */
- 0, /* PTYPE 150 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 151 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 152 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 153 */
- 0, /* PTYPE 154 */
- 0, /* PTYPE 155 */
- 0, /* PTYPE 156 */
- 0, /* PTYPE 157 */
- 0, /* PTYPE 158 */
- 0, /* PTYPE 159 */
- 0, /* PTYPE 160 */
- 0, /* PTYPE 161 */
- 0, /* PTYPE 162 */
- 0, /* PTYPE 163 */
- 0, /* PTYPE 164 */
- 0, /* PTYPE 165 */
- 0, /* PTYPE 166 */
- 0, /* PTYPE 167 */
- 0, /* PTYPE 168 */
- 0, /* PTYPE 169 */
- 0, /* PTYPE 170 */
- 0, /* PTYPE 171 */
- 0, /* PTYPE 172 */
- 0, /* PTYPE 173 */
- 0, /* PTYPE 174 */
- 0, /* PTYPE 175 */
- 0, /* PTYPE 176 */
- 0, /* PTYPE 177 */
- 0, /* PTYPE 178 */
- 0, /* PTYPE 179 */
- 0, /* PTYPE 180 */
- 0, /* PTYPE 181 */
- 0, /* PTYPE 182 */
- 0, /* PTYPE 183 */
- 0, /* PTYPE 184 */
- 0, /* PTYPE 185 */
- 0, /* PTYPE 186 */
- 0, /* PTYPE 187 */
- 0, /* PTYPE 188 */
- 0, /* PTYPE 189 */
- 0, /* PTYPE 190 */
- 0, /* PTYPE 191 */
- 0, /* PTYPE 192 */
- 0, /* PTYPE 193 */
- 0, /* PTYPE 194 */
- 0, /* PTYPE 195 */
- 0, /* PTYPE 196 */
- 0, /* PTYPE 197 */
- 0, /* PTYPE 198 */
- 0, /* PTYPE 199 */
- 0, /* PTYPE 200 */
- 0, /* PTYPE 201 */
- 0, /* PTYPE 202 */
- 0, /* PTYPE 203 */
- 0, /* PTYPE 204 */
- 0, /* PTYPE 205 */
- 0, /* PTYPE 206 */
- 0, /* PTYPE 207 */
- 0, /* PTYPE 208 */
- 0, /* PTYPE 209 */
- 0, /* PTYPE 210 */
- 0, /* PTYPE 211 */
- 0, /* PTYPE 212 */
- 0, /* PTYPE 213 */
- 0, /* PTYPE 214 */
- 0, /* PTYPE 215 */
- 0, /* PTYPE 216 */
- 0, /* PTYPE 217 */
- 0, /* PTYPE 218 */
- 0, /* PTYPE 219 */
- 0, /* PTYPE 220 */
- 0, /* PTYPE 221 */
- 0, /* PTYPE 222 */
- 0, /* PTYPE 223 */
- 0, /* PTYPE 224 */
- 0, /* PTYPE 225 */
- 0, /* PTYPE 226 */
- 0, /* PTYPE 227 */
- 0, /* PTYPE 228 */
- 0, /* PTYPE 229 */
- 0, /* PTYPE 230 */
- 0, /* PTYPE 231 */
- 0, /* PTYPE 232 */
- 0, /* PTYPE 233 */
- 0, /* PTYPE 234 */
- 0, /* PTYPE 235 */
- 0, /* PTYPE 236 */
- 0, /* PTYPE 237 */
- 0, /* PTYPE 238 */
- 0, /* PTYPE 239 */
- 0, /* PTYPE 240 */
- 0, /* PTYPE 241 */
- 0, /* PTYPE 242 */
- 0, /* PTYPE 243 */
- 0, /* PTYPE 244 */
- 0, /* PTYPE 245 */
- 0, /* PTYPE 246 */
- 0, /* PTYPE 247 */
- 0, /* PTYPE 248 */
- 0, /* PTYPE 249 */
- 0, /* PTYPE 250 */
- 0, /* PTYPE 251 */
- 0, /* PTYPE 252 */
- 0, /* PTYPE 253 */
- 0, /* PTYPE 254 */
- 0, /* PTYPE 255 */
+ static const uint16_t ptype_table[UINT8_MAX] __rte_cache_aligned = {
+ /* [0] - [21] reserved */
+
+ /* Non tunneled IPv4 */
+ [22] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [24] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [25] reserved */
+ [26] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv4 */
+ [29] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [30] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [31] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [32] reserved */
+ [33] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [34] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [35] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv6 */
+ [36] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [37] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [38] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [39] reserved */
+ [40] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [41] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [42] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN */
+ [43] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+ [44] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [45] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [46] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [47] reserved */
+ [48] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [49] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [50] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+ [51] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [52] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [53] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [54] reserved */
+ [55] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [56] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [57] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+ [58] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [59] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [60] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [61] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [62] reserved */
+ [63] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [64] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [65] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [66] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [67] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [68] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [69] reserved */
+ [70] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [71] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [72] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [73] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [74] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [75] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [76] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [77] reserved */
+ [78] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [79] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [80] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [81] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [82] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [83] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [84] reserved */
+ [85] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [86] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [87] = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* Non tunneled IPv6 */
+ [88] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [90] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [91] reserved */
+ [92] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv4 */
+ [95] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [96] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [97] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [98] reserved */
+ [99] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [100] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [101] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv6 */
+ [102] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [103] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [104] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [105] reserved */
+ [106] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [107] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [108] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN */
+ [109] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+ [110] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [111] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [112] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [113] reserved */
+ [114] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [115] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [116] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+ [117] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [118] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [119] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [120] reserved */
+ [121] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [122] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [123] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+ [124] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [125] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [126] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [127] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [128] reserved */
+ [129] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [130] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [131] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [132] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [133] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [134] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [135] reserved */
+ [136] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [137] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [138] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [139] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [140] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [141] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [142] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [143] reserved */
+ [144] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [145] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [146] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [147] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [148] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [149] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [150] reserved */
+ [151] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [152] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [153] = RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT_MACVLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* All others reserved */
};
- return ip_ptype_map[ptype];
+ return ptype_table[ptype];
}
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
@@ -708,11 +939,11 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- mb->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ mb->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -951,9 +1182,9 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ rxm->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1110,10 +1341,9 @@ i40e_recv_scattered_pkts(void *rx_queue,
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- first_seg->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ first_seg->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH 06/17] bond: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (4 preceding siblings ...)
2015-01-29 3:15 ` [dpdk-dev] [PATCH 05/17] i40e: " Helin Zhang
@ 2015-01-29 3:15 ` Helin Zhang
2015-02-11 15:01 ` Declan Doherty
2015-01-29 3:15 ` [dpdk-dev] [PATCH 07/17] enic: " Helin Zhang
` (13 subsequent siblings)
19 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:15 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_bond/rte_eth_bond_pmd.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/lib/librte_pmd_bond/rte_eth_bond_pmd.c b/lib/librte_pmd_bond/rte_eth_bond_pmd.c
index 8b80297..acd8e77 100644
--- a/lib/librte_pmd_bond/rte_eth_bond_pmd.c
+++ b/lib/librte_pmd_bond/rte_eth_bond_pmd.c
@@ -319,12 +319,11 @@ xmit_l23_hash(const struct rte_mbuf *buf, uint8_t slave_count)
hash = ether_hash(eth_hdr);
- if (buf->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(buf->packet_type)) {
struct ipv4_hdr *ipv4_hdr = (struct ipv4_hdr *)
((char *)(eth_hdr + 1) + vlan_offset);
l3hash = ipv4_hash(ipv4_hdr);
-
- } else if (buf->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(buf->packet_type)) {
struct ipv6_hdr *ipv6_hdr = (struct ipv6_hdr *)
((char *)(eth_hdr + 1) + vlan_offset);
l3hash = ipv6_hash(ipv6_hdr);
@@ -346,7 +345,7 @@ xmit_l34_hash(const struct rte_mbuf *buf, uint8_t slave_count)
struct tcp_hdr *tcp_hdr = NULL;
uint32_t hash, l3hash = 0, l4hash = 0;
- if (buf->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(buf->packet_type)) {
struct ipv4_hdr *ipv4_hdr = (struct ipv4_hdr *)
((char *)(eth_hdr + 1) + vlan_offset);
size_t ip_hdr_offset;
@@ -365,7 +364,7 @@ xmit_l34_hash(const struct rte_mbuf *buf, uint8_t slave_count)
ip_hdr_offset);
l4hash = HASH_L4_PORTS(udp_hdr);
}
- } else if (buf->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(buf->packet_type)) {
struct ipv6_hdr *ipv6_hdr = (struct ipv6_hdr *)
((char *)(eth_hdr + 1) + vlan_offset);
l3hash = ipv6_hash(ipv6_hdr);
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH 07/17] enic: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (5 preceding siblings ...)
2015-01-29 3:15 ` [dpdk-dev] [PATCH 06/17] bond: " Helin Zhang
@ 2015-01-29 3:15 ` Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 08/17] vmxnet3: " Helin Zhang
` (12 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:15 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
ol_flags are replaced by unified packet type.
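Note that packet_type is built per field: L3, L4, tunnel and inner L3 types
occupy separate bit ranges (see patch 01/17), so a PMD that can also classify
the L4 header could report both in one value, for example (a sketch only, not
something the enic receive path below does):

    /* hypothetical: IPv4 packet carrying UDP */
    rx_pkt->packet_type = RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP;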
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_enic/enic_main.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/lib/librte_pmd_enic/enic_main.c b/lib/librte_pmd_enic/enic_main.c
index 48fdca2..9acba9a 100644
--- a/lib/librte_pmd_enic/enic_main.c
+++ b/lib/librte_pmd_enic/enic_main.c
@@ -423,7 +423,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -432,7 +432,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Header split */
if (sop && !eop) {
@@ -445,7 +445,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -457,13 +457,14 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -475,7 +476,8 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV6;
}
}
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH 08/17] vmxnet3: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (6 preceding siblings ...)
2015-01-29 3:15 ` [dpdk-dev] [PATCH 07/17] enic: " Helin Zhang
@ 2015-01-29 3:15 ` Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 09/17] app/test-pipeline: " Helin Zhang
` (11 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:15 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
ol_flags are replaced by unified packet type.
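The hunk below keys the IPv4 vs. IPv4_EXT decision off the IHL field of the
header; as a standalone sketch of that test (the helper name is illustrative):

    /* IHL is the low 4 bits of version_ihl, counted in 32-bit words;
     * anything larger than the 20-byte base header carries options. */
    static inline uint16_t
    ipv4_hdr_ptype(const struct ipv4_hdr *ip)
    {
            if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
                    return RTE_PTYPE_L3_IPV4_EXT;
            return RTE_PTYPE_L3_IPV4;
    }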
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 8425f32..c85ebd8 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -650,9 +650,9 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
- rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
else
- rxm->ol_flags |= PKT_RX_IPV4_HDR;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4;
if (!rcd->cnc) {
if (!rcd->ipc)
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH 09/17] app/test-pipeline: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (7 preceding siblings ...)
2015-01-29 3:15 ` [dpdk-dev] [PATCH 08/17] vmxnet3: " Helin Zhang
@ 2015-01-29 3:15 ` Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 10/17] app/test-pmd: " Helin Zhang
` (10 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:15 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pipeline/pipeline_hash.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 4598ad4..db650c8 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,14 +459,14 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;
k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH 10/17] app/test-pmd: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (8 preceding siblings ...)
2015-01-29 3:15 ` [dpdk-dev] [PATCH 09/17] app/test-pipeline: " Helin Zhang
@ 2015-01-29 3:15 ` Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 11/17] app/test: " Helin Zhang
` (9 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:15 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
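The tunnel detection below uses RTE_ETH_IS_TUNNEL_PKT() from patch 01/17,
which reduces to a mask over the tunnel-type bits of packet_type:

    /* as proposed in patch 01/17 (lib/librte_mbuf/rte_mbuf.h) */
    #define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)

so csumonly/rxonly no longer depend on the PKT_RX_TUNNEL_* flags, which
(per the comment kept in csumonly.c) only i40e set, and only for VXLAN.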
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pmd/csumonly.c | 6 +++---
app/test-pmd/rxonly.c | 9 +++------
2 files changed, 6 insertions(+), 9 deletions(-)
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 41711fd..5e08272 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -319,7 +319,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
uint16_t nb_tx;
uint16_t i;
uint64_t ol_flags;
- uint16_t testpmd_ol_flags;
+ uint16_t testpmd_ol_flags, packet_type;
uint8_t l4_proto, l4_tun_len = 0;
uint16_t ethertype = 0, outer_ethertype = 0;
uint16_t l2_len = 0, l3_len = 0, l4_len = 0;
@@ -362,6 +362,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
tunnel = 0;
l4_tun_len = 0;
m = pkts_burst[i];
+ packet_type = m->packet_type;
/* Update the L3/L4 checksum error packet statistics */
rx_bad_ip_csum += ((m->ol_flags & PKT_RX_IP_CKSUM_BAD) != 0);
@@ -387,8 +388,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
/* currently, this flag is set by i40e only if the
* packet is vxlan */
- } else if (m->ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR))
+ } else if (RTE_ETH_IS_TUNNEL_PKT(packet_type))
tunnel = 1;
if (tunnel == 1) {
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index fdfe990..8eb68c4 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -92,7 +92,7 @@ pkt_burst_receive(struct fwd_stream *fs)
uint64_t ol_flags;
uint16_t nb_rx;
uint16_t i, packet_type;
- uint64_t is_encapsulation;
+ uint16_t is_encapsulation;
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
@@ -135,10 +135,7 @@ pkt_burst_receive(struct fwd_stream *fs)
eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
-
- is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR);
-
+ is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
print_ether_addr(" src=", &eth_hdr->s_addr);
print_ether_addr(" - dst=", &eth_hdr->d_addr);
printf(" - type=0x%04x - length=%u - nb_segs=%d",
@@ -174,7 +171,7 @@ pkt_burst_receive(struct fwd_stream *fs)
l2_len = sizeof(struct ether_hdr);
/* Do not support ipv4 option field */
- if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
l3_len = sizeof(struct ipv4_hdr);
ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len);
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH 11/17] app/test: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (9 preceding siblings ...)
2015-01-29 3:15 ` [dpdk-dev] [PATCH 10/17] app/test-pmd: " Helin Zhang
@ 2015-01-29 3:15 ` Helin Zhang
2015-01-29 3:16 ` [dpdk-dev] [PATCH 12/17] examples/ip_fragmentation: " Helin Zhang
` (8 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:15 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test/packet_burst_generator.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index 4a89663..0a936ea 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -258,18 +258,16 @@ nomore_mbuf:
pkt->vlan_tci = ETHER_TYPE_IPv4;
pkt->l3_len = sizeof(struct ipv4_hdr);
+ pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (vlan_enabled)
- pkt->ol_flags = PKT_RX_IPV4_HDR | PKT_RX_VLAN_PKT;
- else
- pkt->ol_flags = PKT_RX_IPV4_HDR;
+ pkt->ol_flags = PKT_RX_VLAN_PKT;
} else {
pkt->vlan_tci = ETHER_TYPE_IPv6;
pkt->l3_len = sizeof(struct ipv6_hdr);
+ pkt->packet_type = RTE_PTYPE_L3_IPV6;
if (vlan_enabled)
- pkt->ol_flags = PKT_RX_IPV6_HDR | PKT_RX_VLAN_PKT;
- else
- pkt->ol_flags = PKT_RX_IPV6_HDR;
+ pkt->ol_flags = PKT_RX_VLAN_PKT;
}
pkts_burst[nb_pkt] = pkt;
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH 12/17] examples/ip_fragmentation: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (10 preceding siblings ...)
2015-01-29 3:15 ` [dpdk-dev] [PATCH 11/17] app/test: " Helin Zhang
@ 2015-01-29 3:16 ` Helin Zhang
2015-01-29 3:16 ` [dpdk-dev] [PATCH 13/17] examples/ip_reassembly: " Helin Zhang
` (7 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:16 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_fragmentation/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index eac5427..152844e 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -286,7 +286,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
len = qconf->tx_mbufs[port_out].len;
/* if this is an IPv4 packet */
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
/* Read the lookup key (i.e. ip_dst) from the input packet */
@@ -320,9 +320,8 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
if (unlikely (len2 < 0))
return;
}
- }
- /* if this is an IPv6 packet */
- else if (m->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if this is an IPv6 packet */
struct ipv6_hdr *ip_hdr;
ipv6 = 1;
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH 13/17] examples/ip_reassembly: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (11 preceding siblings ...)
2015-01-29 3:16 ` [dpdk-dev] [PATCH 12/17] examples/ip_fragmentation: " Helin Zhang
@ 2015-01-29 3:16 ` Helin Zhang
2015-01-29 3:16 ` [dpdk-dev] [PATCH 14/17] examples/l3fwd-acl: " Helin Zhang
` (6 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:16 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
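RTE_ETH_IS_IPV6_HDR() tests bit 6 of packet_type, which is set for
RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT and RTE_PTYPE_L3_IPV6_EXT_UNKNOWN
alike, so one check replaces the old PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT
test, e.g. (the handler name is illustrative):

    if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
            /* plain IPv6 as well as IPv6 with extension headers */
            reassemble_ipv6(m);
    }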
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_reassembly/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8492153..5ef2135 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -357,7 +357,7 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dst_port = portid;
/* if packet is IPv4 */
- if (m->ol_flags & (PKT_RX_IPV4_HDR)) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
@@ -397,9 +397,8 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
}
eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
- }
- /* if packet is IPv6 */
- else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if packet is IPv6 */
struct ipv6_extension_fragment *frag_hdr;
struct ipv6_hdr *ip_hdr;
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH 14/17] examples/l3fwd-acl: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (12 preceding siblings ...)
2015-01-29 3:16 ` [dpdk-dev] [PATCH 13/17] examples/ip_reassembly: " Helin Zhang
@ 2015-01-29 3:16 ` Helin Zhang
2015-01-29 3:16 ` [dpdk-dev] [PATCH 15/17] examples/l3fwd-power: " Helin Zhang
` (5 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:16 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-acl/main.c | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index f1f7601..af70ccd 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -651,9 +651,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];
- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt,
unsigned char *) + sizeof(struct ether_hdr));
@@ -674,8 +672,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
rte_pktmbuf_free(pkt);
}
- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -693,17 +690,13 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];
- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
-
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;
- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -751,9 +744,9 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
- if (m->ol_flags & PKT_RX_IPV4_HDR)
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
dump_acl4_rule(m, res);
- else
+ else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
dump_acl6_rule(m, res);
}
#endif
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH 15/17] examples/l3fwd-power: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (13 preceding siblings ...)
2015-01-29 3:16 ` [dpdk-dev] [PATCH 14/17] examples/l3fwd-acl: " Helin Zhang
@ 2015-01-29 3:16 ` Helin Zhang
2015-01-29 3:16 ` [dpdk-dev] [PATCH 16/17] examples/l3fwd: " Helin Zhang
` (4 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:16 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-power/main.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index f6b55b9..964e5b9 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -638,7 +638,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr =
(struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char*)
@@ -673,8 +673,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
- }
- else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
struct ipv6_hdr *ipv6_hdr;
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH 16/17] examples/l3fwd: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (14 preceding siblings ...)
2015-01-29 3:16 ` [dpdk-dev] [PATCH 15/17] examples/l3fwd-power: " Helin Zhang
@ 2015-01-29 3:16 ` Helin Zhang
2015-01-29 3:16 ` [dpdk-dev] [PATCH 17/17] mbuf: remove old packet type bit masks for ol_flags Helin Zhang
` (3 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:16 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
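The x4 path below needs an "all four packets are IPv4" condition; with the
unified types this becomes a logical AND of four mask tests rather than a
bitwise AND of ol_flags. A sketch of the pattern the hunks open-code (the
helper name is illustrative):

    static inline int
    all_ipv4_x4(struct rte_mbuf *pkt[4])
    {
            return RTE_ETH_IS_IPV4_HDR(pkt[0]->packet_type) &&
                   RTE_ETH_IS_IPV4_HDR(pkt[1]->packet_type) &&
                   RTE_ETH_IS_IPV4_HDR(pkt[2]->packet_type) &&
                   RTE_ETH_IS_IPV4_HDR(pkt[3]->packet_type);
    }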
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd/main.c | 64 +++++++++++++++++++++++++++++----------------------
1 file changed, 37 insertions(+), 27 deletions(-)
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 6f7d7d4..d02a19c 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -958,7 +958,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char *) +
sizeof(struct ether_hdr));
@@ -993,7 +993,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
send_single_packet(m, dst_port);
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;
@@ -1039,11 +1039,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
-rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint16_t ptype)
{
uint8_t ihl;
- if ((flags & PKT_RX_IPV4_HDR) != 0) {
+ if (RTE_ETH_IS_IPV4_HDR(ptype)) {
ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
@@ -1074,11 +1074,11 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;
- if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
- } else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1112,7 +1112,7 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];
dst_port[0] = dp;
- rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
+ rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
@@ -1122,7 +1122,7 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
* Read ol_flags and destination IPV4 addresses from 4 mbufs.
*/
static inline void
-processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
+processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, int *ipv4_flag)
{
struct ipv4_hdr *ipv4_hdr;
struct ether_hdr *eth_hdr;
@@ -1131,22 +1131,22 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x0 = ipv4_hdr->dst_addr;
- flag[0] = pkt[0]->ol_flags & PKT_RX_IPV4_HDR;
eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x1 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[1]->ol_flags;
eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x2 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[2]->ol_flags;
eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x3 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[3]->ol_flags;
+ *ipv4_flag = RTE_ETH_IS_IPV4_HDR(pkt[0]->packet_type) &&
+ RTE_ETH_IS_IPV4_HDR(pkt[1]->packet_type) &&
+ RTE_ETH_IS_IPV4_HDR(pkt[2]->packet_type) &&
+ RTE_ETH_IS_IPV4_HDR(pkt[3]->packet_type);
dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
@@ -1156,7 +1156,7 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
-processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
+processx4_step2(const struct lcore_conf *qconf, __m128i dip, int ipv4_flag,
uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
{
rte_xmm_t dst;
@@ -1167,7 +1167,7 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);
/* if all 4 packets are IPV4. */
- if (likely(flag != 0)) {
+ if (likely(ipv4_flag)) {
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1218,13 +1218,13 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[3], te[3]);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
- &dst_port[0], pkt[0]->ol_flags);
+ &dst_port[0], pkt[0]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
- &dst_port[1], pkt[1]->ol_flags);
+ &dst_port[1], pkt[1]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
- &dst_port[2], pkt[2]->ol_flags);
+ &dst_port[2], pkt[2]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
- &dst_port[3], pkt[3]->ol_flags);
+ &dst_port[3], pkt[3]->packet_type);
}
/*
@@ -1411,7 +1411,7 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
- uint32_t flag[MAX_PKT_BURST / FWDSTEP];
+ int ipv4_flag[MAX_PKT_BURST / FWDSTEP];
uint16_t pnum[MAX_PKT_BURST + 1];
#endif
@@ -1481,14 +1481,24 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
for (j = 0; j < n ; j+=4) {
- uint32_t ol_flag = pkts_burst[j]->ol_flags
- & pkts_burst[j+1]->ol_flags
- & pkts_burst[j+2]->ol_flags
- & pkts_burst[j+3]->ol_flags;
- if (ol_flag & PKT_RX_IPV4_HDR ) {
+ if (RTE_ETH_IS_IPV4_HDR(
+ pkts_burst[j]->packet_type) &&
+ RTE_ETH_IS_IPV4_HDR(
+ pkts_burst[j+1]->packet_type) &&
+ RTE_ETH_IS_IPV4_HDR(
+ pkts_burst[j+2]->packet_type) &&
+ RTE_ETH_IS_IPV4_HDR(
+ pkts_burst[j+3]->packet_type)) {
simple_ipv4_fwd_4pkts(&pkts_burst[j],
portid, qconf);
- } else if (ol_flag & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(
+ pkts_burst[j]->packet_type) &&
+ RTE_ETH_IS_IPV6_HDR(
+ pkts_burst[j+1]->packet_type) &&
+ RTE_ETH_IS_IPV6_HDR(
+ pkts_burst[j+2]->packet_type) &&
+ RTE_ETH_IS_IPV6_HDR(
+ pkts_burst[j+3]->packet_type)) {
simple_ipv6_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1513,13 +1523,13 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
- &flag[j / FWDSTEP]);
+ &ipv4_flag[j / FWDSTEP]);
}
k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
- flag[j / FWDSTEP], portid,
+ ipv4_flag[j / FWDSTEP], portid,
&pkts_burst[j], &dst_port[j]);
}
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH 17/17] mbuf: remove old packet type bit masks for ol_flags
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (15 preceding siblings ...)
2015-01-29 3:16 ` [dpdk-dev] [PATCH 16/17] examples/l3fwd: " Helin Zhang
@ 2015-01-29 3:16 ` Helin Zhang
2015-01-30 13:37 ` Olivier MATZ
2015-01-30 13:31 ` [dpdk-dev] [PATCH 00/17] unified packet type Olivier MATZ
` (2 subsequent siblings)
19 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-01-29 3:16 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
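Applications that still test the removed flags need the equivalent
packet_type check; a minimal before/after sketch (the handler name is
illustrative):

    /* before this series: */
    if (m->ol_flags & PKT_RX_IPV4_HDR)
            handle_ipv4(m);

    /* after this series: */
    if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
            handle_ipv4(m);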
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 6 ------
lib/librte_mbuf/rte_mbuf.h | 10 ++--------
2 files changed, 2 insertions(+), 14 deletions(-)
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 1b14e02..8050ccf 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -215,14 +215,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
- case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
- case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
- case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
- case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
- case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
- case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 94ae344..5df0d61 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -90,16 +90,10 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
-#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
-#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
-#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
-#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
-#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
-#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
-#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
-#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
+#define PKT_RX_FDIR_ID (1ULL << 11) /**< FD id reported if FDIR match. */
+#define PKT_RX_FDIR_FLX (1ULL << 12) /**< Flexible bytes reported if FDIR match. */
/* add new RX flags here */
/* add new TX flags here */
--
1.8.1.4
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH 04/17] ixgbe: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 04/17] " Helin Zhang
@ 2015-01-29 23:30 ` Bruce Richardson
2015-01-29 23:52 ` Liang, Cunming
2015-01-30 6:09 ` Zhang, Helin
0 siblings, 2 replies; 257+ messages in thread
From: Bruce Richardson @ 2015-01-29 23:30 UTC (permalink / raw)
To: Helin Zhang; +Cc: dev
On Thu, Jan 29, 2015 at 11:15:52AM +0800, Helin Zhang wrote:
> To unify packet types among all PMDs, bit masks of packet type for
> ol_flags are replaced by unified packet type for Vector PMD.
>
Two suggestions on the commit log:
1. Can you add scalar and vector into the titles to make it clear how this
patch and the previous ones differ
2. Can you add a note calling out performance impacts for this patch. If no
performance impacts, then please note that for reviewers.
/Bruce
> Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> ---
> lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 39 +++++++++++++++++++----------------
> 1 file changed, 21 insertions(+), 18 deletions(-)
>
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> index b54cb19..b3cf7dd 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> @@ -134,44 +134,35 @@ ixgbe_rxq_rearm(struct igb_rx_queue *rxq)
> */
> #ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE
>
> -#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
> - PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
> - PKT_RX_IPV6_HDR_EXT))
> -#define OLFLAGS_MASK_V (((uint64_t)OLFLAGS_MASK << 48) | \
> - ((uint64_t)OLFLAGS_MASK << 32) | \
> - ((uint64_t)OLFLAGS_MASK << 16) | \
> - ((uint64_t)OLFLAGS_MASK))
> -#define PTYPE_SHIFT (1)
> +#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
> + ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
> + ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
> + ((uint64_t)PKT_RX_VLAN_PKT))
> #define VTAG_SHIFT (3)
>
> static inline void
> desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
> {
> - __m128i ptype0, ptype1, vtag0, vtag1;
> + __m128i vtag0, vtag1;
> union {
> uint16_t e[4];
> uint64_t dword;
> } vol;
>
> - ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
> - ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
> vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
> vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);
>
> - ptype1 = _mm_unpacklo_epi32(ptype0, ptype1);
> vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
> -
> - ptype1 = _mm_slli_epi16(ptype1, PTYPE_SHIFT);
> vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);
>
> - ptype1 = _mm_or_si128(ptype1, vtag1);
> - vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
> + vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;
>
> rx_pkts[0]->ol_flags = vol.e[0];
> rx_pkts[1]->ol_flags = vol.e[1];
> rx_pkts[2]->ol_flags = vol.e[2];
> rx_pkts[3]->ol_flags = vol.e[3];
> }
> +
> #else
> #define desc_to_olflags_v(desc, rx_pkts) do {} while (0)
> #endif
> @@ -204,6 +195,8 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
> 0 /* ignore pkt_type field */
> );
> __m128i dd_check, eop_check;
> + __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
> + 0xFFFFFFFF, 0xFFFF07F0);
>
> if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
> return 0;
> @@ -239,7 +232,8 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
> 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
> 13, 12, /* octet 12~13, low 16 bits pkt_len */
> 13, 12, /* octet 12~13, 16 bits data_len */
> - 0xFF, 0xFF /* skip pkt_type field */
> + 1, /* octet 1, 8 bits pkt_type field */
> + 0 /* octet 0, 4 bits offset 4 pkt_type field */
> );
>
> /* Cache is empty -> need to scan the buffer rings, but first move
> @@ -248,6 +242,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>
> /*
> * A. load 4 packet in one loop
> + * [A*. mask out 4 unused dirty field in desc]
> * B. copy 4 mbuf point from swring to rx_pkts
> * C. calc the number of DD bits among the 4 packets
> * [C*. extract the end-of-packet bit, if requested]
> @@ -289,6 +284,14 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
> /* B.2 copy 2 mbuf point into rx_pkts */
> _mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);
>
> + /* A* mask out 0~3 bits RSS type */
> + descs[3] = _mm_and_si128(descs[3], desc_mask);
> + descs[2] = _mm_and_si128(descs[2], desc_mask);
> +
> + /* A* mask out 0~3 bits RSS type */
> + descs[1] = _mm_and_si128(descs[1], desc_mask);
> + descs[0] = _mm_and_si128(descs[0], desc_mask);
> +
> /* avoid compiler reorder optimization */
> rte_compiler_barrier();
>
> @@ -301,7 +304,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
> /* C.1 4=>2 filter staterr info only */
> sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
>
> - /* set ol_flags with packet type and vlan tag */
> + /* set ol_flags with vlan packet type */
> desc_to_olflags_v(descs, &rx_pkts[pos]);
>
> /* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
> --
> 1.8.1.4
>
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH 04/17] ixgbe: support of unified packet type
2015-01-29 23:30 ` Bruce Richardson
@ 2015-01-29 23:52 ` Liang, Cunming
2015-01-30 3:39 ` Bruce Richardson
2015-01-30 6:09 ` Zhang, Helin
1 sibling, 1 reply; 257+ messages in thread
From: Liang, Cunming @ 2015-01-29 23:52 UTC (permalink / raw)
To: Richardson, Bruce, Zhang, Helin; +Cc: dev
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Thursday, January 29, 2015 4:30 PM
> To: Zhang, Helin
> Cc: dev@dpdk.org; Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev,
> Konstantin
> Subject: Re: [PATCH 04/17] ixgbe: support of unified packet type
>
> On Thu, Jan 29, 2015 at 11:15:52AM +0800, Helin Zhang wrote:
> > To unify packet types among all PMDs, bit masks of packet type for
> > ol_flags are replaced by unified packet type for Vector PMD.
> >
>
> Two suggestions on the commit log:
> 1. Can you add scalar and vector into the titles to make it clear how this
> patch and the previous ones differ
> 2. Can you add a note calling out performance impacts for this patch. If no
> performance impacts, then please note that for reviewers.
[Liang, Cunming] Accept, will update it in v2.
For performance, we lose 1 cycle per packet in the 4x10GE io fwd loopback unit test.
>
> /Bruce
>
> > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > ---
> > lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 39
> +++++++++++++++++++----------------
> > 1 file changed, 21 insertions(+), 18 deletions(-)
> >
> > diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > index b54cb19..b3cf7dd 100644
> > --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > @@ -134,44 +134,35 @@ ixgbe_rxq_rearm(struct igb_rx_queue *rxq)
> > */
> > #ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE
> >
> > -#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT |
> PKT_RX_IPV4_HDR |\
> > - PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
> > - PKT_RX_IPV6_HDR_EXT))
> > -#define OLFLAGS_MASK_V (((uint64_t)OLFLAGS_MASK << 48) | \
> > - ((uint64_t)OLFLAGS_MASK << 32) | \
> > - ((uint64_t)OLFLAGS_MASK << 16) | \
> > - ((uint64_t)OLFLAGS_MASK))
> > -#define PTYPE_SHIFT (1)
> > +#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
> > + ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
> > + ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
> > + ((uint64_t)PKT_RX_VLAN_PKT))
> > #define VTAG_SHIFT (3)
> >
> > static inline void
> > desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
> > {
> > - __m128i ptype0, ptype1, vtag0, vtag1;
> > + __m128i vtag0, vtag1;
> > union {
> > uint16_t e[4];
> > uint64_t dword;
> > } vol;
> >
> > - ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
> > - ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
> > vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
> > vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);
> >
> > - ptype1 = _mm_unpacklo_epi32(ptype0, ptype1);
> > vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
> > -
> > - ptype1 = _mm_slli_epi16(ptype1, PTYPE_SHIFT);
> > vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);
> >
> > - ptype1 = _mm_or_si128(ptype1, vtag1);
> > - vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
> > + vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;
> >
> > rx_pkts[0]->ol_flags = vol.e[0];
> > rx_pkts[1]->ol_flags = vol.e[1];
> > rx_pkts[2]->ol_flags = vol.e[2];
> > rx_pkts[3]->ol_flags = vol.e[3];
> > }
> > +
> > #else
> > #define desc_to_olflags_v(desc, rx_pkts) do {} while (0)
> > #endif
> > @@ -204,6 +195,8 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct
> rte_mbuf **rx_pkts,
> > 0 /* ignore pkt_type field */
> > );
> > __m128i dd_check, eop_check;
> > + __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
> > + 0xFFFFFFFF, 0xFFFF07F0);
> >
> > if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
> > return 0;
> > @@ -239,7 +232,8 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct
> rte_mbuf **rx_pkts,
> > 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
> > 13, 12, /* octet 12~13, low 16 bits pkt_len */
> > 13, 12, /* octet 12~13, 16 bits data_len */
> > - 0xFF, 0xFF /* skip pkt_type field */
> > + 1, /* octet 1, 8 bits pkt_type field */
> > + 0 /* octet 0, 4 bits offset 4 pkt_type field */
> > );
> >
> > /* Cache is empty -> need to scan the buffer rings, but first move
> > @@ -248,6 +242,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct
> rte_mbuf **rx_pkts,
> >
> > /*
> > * A. load 4 packet in one loop
> > + * [A*. mask out 4 unused dirty field in desc]
> > * B. copy 4 mbuf point from swring to rx_pkts
> > * C. calc the number of DD bits among the 4 packets
> > * [C*. extract the end-of-packet bit, if requested]
> > @@ -289,6 +284,14 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct
> rte_mbuf **rx_pkts,
> > /* B.2 copy 2 mbuf point into rx_pkts */
> > _mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);
> >
> > + /* A* mask out 0~3 bits RSS type */
> > + descs[3] = _mm_and_si128(descs[3], desc_mask);
> > + descs[2] = _mm_and_si128(descs[2], desc_mask);
> > +
> > + /* A* mask out 0~3 bits RSS type */
> > + descs[1] = _mm_and_si128(descs[1], desc_mask);
> > + descs[0] = _mm_and_si128(descs[0], desc_mask);
> > +
> > /* avoid compiler reorder optimization */
> > rte_compiler_barrier();
> >
> > @@ -301,7 +304,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct
> rte_mbuf **rx_pkts,
> > /* C.1 4=>2 filter staterr info only */
> > sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
> >
> > - /* set ol_flags with packet type and vlan tag */
> > + /* set ol_flags with vlan packet type */
> > desc_to_olflags_v(descs, &rx_pkts[pos]);
> >
> > /* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
> > --
> > 1.8.1.4
> >
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH 04/17] ixgbe: support of unified packet type
2015-01-29 23:52 ` Liang, Cunming
@ 2015-01-30 3:39 ` Bruce Richardson
0 siblings, 0 replies; 257+ messages in thread
From: Bruce Richardson @ 2015-01-30 3:39 UTC (permalink / raw)
To: Liang, Cunming; +Cc: dev
On Thu, Jan 29, 2015 at 11:52:03PM +0000, Liang, Cunming wrote:
>
>
> > -----Original Message-----
> > From: Richardson, Bruce
> > Sent: Thursday, January 29, 2015 4:30 PM
> > To: Zhang, Helin
> > Cc: dev@dpdk.org; Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev,
> > Konstantin
> > Subject: Re: [PATCH 04/17] ixgbe: support of unified packet type
> >
> > On Thu, Jan 29, 2015 at 11:15:52AM +0800, Helin Zhang wrote:
> > > To unify packet types among all PMDs, bit masks of packet type for
> > > ol_flags are replaced by unified packet type for Vector PMD.
> > >
> >
> > Two suggestions on the commit log:
> > 1. Can you add scalar and vector into the titles to make it clear how this
> > patch and the previous ones differ
> > 2. Can you add a note calling out performance impacts for this patch. If no
> > performance impacts, then please note that for reviewers.
> [Liang, Cunming] Accept, will update it in v2.
> For performance, lose 1 cycle per packet during 4x10GE io fwd loopback unit test.
>
Good to know, thanks.
/Bruce
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH 04/17] ixgbe: support of unified packet type
2015-01-29 23:30 ` Bruce Richardson
2015-01-29 23:52 ` Liang, Cunming
@ 2015-01-30 6:09 ` Zhang, Helin
1 sibling, 0 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-01-30 6:09 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: dev
Hi Bruce
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Friday, January 30, 2015 7:30 AM
> To: Zhang, Helin
> Cc: dev@dpdk.org; Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev,
> Konstantin
> Subject: Re: [PATCH 04/17] ixgbe: support of unified packet type
>
> On Thu, Jan 29, 2015 at 11:15:52AM +0800, Helin Zhang wrote:
> > To unify packet types among all PMDs, bit masks of packet type for
> > ol_flags are replaced by unified packet type for Vector PMD.
> >
>
> Two suggestions on the commit log:
> 1. Can you add scalar and vector into the titles to make it clear how this patch
> and the previous ones differ 2. Can you add a note calling out performance
> impacts for this patch. If no performance impacts, then please note that for
> reviewers.
OK. That will be in the v2 patches. Thanks for the good comments!
Regards,
Helin
>
> /Bruce
>
> > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > ---
> > lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 39
> > +++++++++++++++++++----------------
> > 1 file changed, 21 insertions(+), 18 deletions(-)
> >
> > diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > index b54cb19..b3cf7dd 100644
> > --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
> > @@ -134,44 +134,35 @@ ixgbe_rxq_rearm(struct igb_rx_queue *rxq)
> > */
> > #ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE
> >
> > -#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT |
> PKT_RX_IPV4_HDR |\
> > - PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
> > - PKT_RX_IPV6_HDR_EXT))
> > -#define OLFLAGS_MASK_V (((uint64_t)OLFLAGS_MASK << 48) | \
> > - ((uint64_t)OLFLAGS_MASK << 32) | \
> > - ((uint64_t)OLFLAGS_MASK << 16) | \
> > - ((uint64_t)OLFLAGS_MASK))
> > -#define PTYPE_SHIFT (1)
> > +#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
> > + ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
> > + ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
> > + ((uint64_t)PKT_RX_VLAN_PKT))
> > #define VTAG_SHIFT (3)
> >
> > static inline void
> > desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts) {
> > - __m128i ptype0, ptype1, vtag0, vtag1;
> > + __m128i vtag0, vtag1;
> > union {
> > uint16_t e[4];
> > uint64_t dword;
> > } vol;
> >
> > - ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
> > - ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
> > vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
> > vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);
> >
> > - ptype1 = _mm_unpacklo_epi32(ptype0, ptype1);
> > vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
> > -
> > - ptype1 = _mm_slli_epi16(ptype1, PTYPE_SHIFT);
> > vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);
> >
> > - ptype1 = _mm_or_si128(ptype1, vtag1);
> > - vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
> > + vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;
> >
> > rx_pkts[0]->ol_flags = vol.e[0];
> > rx_pkts[1]->ol_flags = vol.e[1];
> > rx_pkts[2]->ol_flags = vol.e[2];
> > rx_pkts[3]->ol_flags = vol.e[3];
> > }
> > +
> > #else
> > #define desc_to_olflags_v(desc, rx_pkts) do {} while (0) #endif @@
> > -204,6 +195,8 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct
> rte_mbuf **rx_pkts,
> > 0 /* ignore pkt_type field */
> > );
> > __m128i dd_check, eop_check;
> > + __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
> > + 0xFFFFFFFF, 0xFFFF07F0);
> >
> > if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
> > return 0;
> > @@ -239,7 +232,8 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq,
> struct rte_mbuf **rx_pkts,
> > 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
> > 13, 12, /* octet 12~13, low 16 bits pkt_len */
> > 13, 12, /* octet 12~13, 16 bits data_len */
> > - 0xFF, 0xFF /* skip pkt_type field */
> > + 1, /* octet 1, 8 bits pkt_type field */
> > + 0 /* octet 0, 4 bits offset 4 pkt_type field */
> > );
> >
> > /* Cache is empty -> need to scan the buffer rings, but first move
> > @@ -248,6 +242,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq,
> > struct rte_mbuf **rx_pkts,
> >
> > /*
> > * A. load 4 packet in one loop
> > + * [A*. mask out 4 unused dirty field in desc]
> > * B. copy 4 mbuf point from swring to rx_pkts
> > * C. calc the number of DD bits among the 4 packets
> > * [C*. extract the end-of-packet bit, if requested] @@ -289,6
> > +284,14 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf
> **rx_pkts,
> > /* B.2 copy 2 mbuf point into rx_pkts */
> > _mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);
> >
> > + /* A* mask out 0~3 bits RSS type */
> > + descs[3] = _mm_and_si128(descs[3], desc_mask);
> > + descs[2] = _mm_and_si128(descs[2], desc_mask);
> > +
> > + /* A* mask out 0~3 bits RSS type */
> > + descs[1] = _mm_and_si128(descs[1], desc_mask);
> > + descs[0] = _mm_and_si128(descs[0], desc_mask);
> > +
> > /* avoid compiler reorder optimization */
> > rte_compiler_barrier();
> >
> > @@ -301,7 +304,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq,
> struct rte_mbuf **rx_pkts,
> > /* C.1 4=>2 filter staterr info only */
> > sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
> >
> > - /* set ol_flags with packet type and vlan tag */
> > + /* set ol_flags with vlan packet type */
> > desc_to_olflags_v(descs, &rx_pkts[pos]);
> >
> > /* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
> > --
> > 1.8.1.4
> >
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH 00/17] unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (16 preceding siblings ...)
2015-01-29 3:16 ` [dpdk-dev] [PATCH 17/17] mbuf: remove old packet type bit masks for ol_flags Helin Zhang
@ 2015-01-30 13:31 ` Olivier MATZ
2015-02-02 2:44 ` Zhang, Helin
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
19 siblings, 1 reply; 257+ messages in thread
From: Olivier MATZ @ 2015-01-30 13:31 UTC (permalink / raw)
To: Helin Zhang, dev
Hi Helin,
On 01/29/2015 04:15 AM, Helin Zhang wrote:
> Currently only 6 bits which are stored in ol_flags are used to indicate
> the packet types. This is not enough, as some NIC hardware can recognize
> quite a lot of packet types, e.g i40e hardware can recognize more than 150
> packet types. Hiding those packet types hides hardware offload capabilities
> which could be quite useful for improving performance and for end users.
> So an unified packet types are needed to support all possible PMDs. Recently
> a 16 bits packet_type field has been added in mbuf header which can be used
> for this purpose. In addition, all packet types stored in ol_flag field
> should be deleted at all, and 6 bits of ol_flags can be save as the benifit.
>
> Initially, 16 bits of packet_type can be divided into several sub fields to
> indicate different packet type information of a packet. The initial design
> is to divide those bits into 4 fields for L3 types, tunnel types, inner L3
> types and L4 types. All PMDs should translate the offloaded packet types
> into this 4 fields of information, for user applications.
You haven't answered the question I asked on your RFC patch [1].
I copied it below:
>> On 01/20/2015 03:28 AM, Zhang, Helin wrote:
>>>> Another question I've asked several times[1][2] : what does having
>>>> RTE_PTYPE_TUNNEL_IP mean? What fields are checked by the hardware (or
>>>> the driver) and what fields should be checked by the application?
>>>> Are you sure that all the drivers (ixgbe, i40e, vmxnet3, enic) check the same
>>>> fields? (ethertype, ip version, ip len correct, ip checksum correct, flags, ...)
>>> RTE_PTYPE_TUNNEL_IP means hardware recognizes the received packet as an
>>> IP-in-IP packet.
>>> All the fields are filled by PMD which is recognized by hardware. The application
>>> can just use it which can save some cpu cycles to recognize the packet type by
>>> software.
>>> Drivers is responsible for filling with correct values according to the packet types
>>> recognized by its hardware. Different PMDs may fill with different values based on
>>> different capabilities.
>>
>> Sorry, that does not answer to my question.
>>
>> Let's take a simple example. Imagine a hardware-1 that is able to
>> recognize an IP packet by checking the ethertype and that the IP
>> version is set to 4.
>> Another hardware-2 recognize an IP packet by checking the ethertype,
>> the IP version and that the IP length is correct compared to m_len(m).
>>
>> For the same packet, both hardwares will return RTE_PTYPE_L3_IPV4, but
>> they don't do the same checks on the packet. As I want my application
>> behave exactly the same whatever the hardware, I need to know what
>> checks are done in hardware, so I can decide what checks must be
>> done in my application.
>>
>> Example of definition: RTE_PTYPE_L3_IPV4 means that ethertype is
>> 0x0800 and IP.version is 4.
>>
>> It means that I can skip these 2 tests in my application if I have
>> this packet_type, but all other checks must be done in software
>> (ip length, flags, checksum, ...)
>>
>> For each packet type, we need a definition like above, and we must
>> check that all drivers setting a packet type behave like described.
I'm not opposed to having a packet_type field in the rx mbuf, but I really
think the question above is an important one to answer if this feature is to
be useful to applications.
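For instance, with a definition like the one above an application could skip
only the ethertype/version tests and keep everything else in software (a
sketch of the assumed semantics; the helper names are illustrative):

    if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
            /* ethertype == 0x0800 and ip.version == 4 assumed verified */
            struct ipv4_hdr *ip = (struct ipv4_hdr *)
                    (rte_pktmbuf_mtod(m, char *) + sizeof(struct ether_hdr));
            check_ip_len(ip, m);       /* still the application's job */
            check_ip_cksum(ip);        /* still the application's job */
    }

That is exactly why the per-type guarantees need to be written down.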
Regards,
Olivier
[1] http://dpdk.org/ml/archives/dev/2015-January/011273.html
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH 17/17] mbuf: remove old packet type bit masks for ol_flags
2015-01-29 3:16 ` [dpdk-dev] [PATCH 17/17] mbuf: remove old packet type bit masks for ol_flags Helin Zhang
@ 2015-01-30 13:37 ` Olivier MATZ
2015-02-02 1:53 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Olivier MATZ @ 2015-01-30 13:37 UTC (permalink / raw)
To: Helin Zhang, dev
Hi Helin,
On 01/29/2015 04:16 AM, Helin Zhang wrote:
> To unify packet types among all PMDs, bit masks and relevant macros
> of packet type for ol_flags are replaced by unified packet type and
> relevant macros.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> ---
> lib/librte_mbuf/rte_mbuf.c | 6 ------
> lib/librte_mbuf/rte_mbuf.h | 10 ++--------
> 2 files changed, 2 insertions(+), 14 deletions(-)
>
> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> index 1b14e02..8050ccf 100644
> --- a/lib/librte_mbuf/rte_mbuf.c
> +++ b/lib/librte_mbuf/rte_mbuf.c
> @@ -215,14 +215,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
> /* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
> /* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
> /* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
> - case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
> - case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
> - case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
> - case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
> case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
> case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
> - case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
> - case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
I see you are not removing IEEE1588. Is there a reason why it is not
handled as a packet_type?
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 94ae344..5df0d61 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -90,16 +90,10 @@ extern "C" {
> #define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
> #define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
> #define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
> -#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
> -#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
> -#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
> -#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
> #define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
> #define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
> -#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
> -#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
> -#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
> -#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
> +#define PKT_RX_FDIR_ID (1ULL << 11) /**< FD id reported if FDIR match. */
> +#define PKT_RX_FDIR_FLX (1ULL << 12) /**< Flexible bytes reported if FDIR match. */
It looks like the bit numbers are not contiguous anymore (there is a hole
between bits 5 and 8).
Regards,
Olivier
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet types
2015-01-29 3:15 ` [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-01-30 13:56 ` Olivier MATZ
2015-02-02 1:43 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Olivier MATZ @ 2015-01-30 13:56 UTC (permalink / raw)
To: Helin Zhang, dev
Hi Helin,
On 01/29/2015 04:15 AM, Helin Zhang wrote:
> As there are only 6 bit flags in ol_flags for indicating packet types,
> which is not enough to describe all the possible packet types hardware
> can recognize. For example, i40e hardware can recognize more than 150
> packet types. Unified packet type is composed of tunnel type, L3 type,
> L4 type and inner L3 type fields, and can be stored in 16 bits mbuf
> field of 'packet_type'.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
> ---
> lib/librte_mbuf/rte_mbuf.h | 74 ++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 74 insertions(+)
>
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 16059c6..94ae344 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -165,6 +165,80 @@ extern "C" {
> /* Use final bit of flags to indicate a control mbuf */
> #define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
>
> +/*
> + * Sixteen bits are divided into several fields to mark packet types. Note that
> + * each field is indexical.
> + * - Bit 3:0 is for tunnel types.
> + * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
> + * - Bit 10:8 is for L4 types. It can also be used for inner L4 types for
> + * tunneling packets.
> + * - Bit 13:11 is for inner L3 types.
> + * - Bit 15:14 is reserved.
Is there a reason for using this specific order?
Also, there are 4 bits for outer L3 types and 3 bits for inner L3
types, but both of them have 6 different supported types. Is it
intentional?
> + *
> + * To be compitable with Vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
compitable -> compatible
> + * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
> + * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
> + *
> + * Note that L3 types values are selected for checking IPV4/IPV6 header from
> + * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
> + * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
> + */
> +#define RTE_PTYPE_UNKNOWN 0x0000 /* 0b0000000000000000 */
> +/* bit 3:0 for tunnel types */
> +#define RTE_PTYPE_TUNNEL_IP 0x0001 /* 0b0000000000000001 */
> +#define RTE_PTYPE_TUNNEL_TCP 0x0002 /* 0b0000000000000010 */
> +#define RTE_PTYPE_TUNNEL_UDP 0x0003 /* 0b0000000000000011 */
> +#define RTE_PTYPE_TUNNEL_GRE 0x0004 /* 0b0000000000000100 */
> +#define RTE_PTYPE_TUNNEL_VXLAN 0x0005 /* 0b0000000000000101 */
> +#define RTE_PTYPE_TUNNEL_NVGRE 0x0006 /* 0b0000000000000110 */
> +#define RTE_PTYPE_TUNNEL_GENEVE 0x0007 /* 0b0000000000000111 */
> +#define RTE_PTYPE_TUNNEL_GRENAT 0x0008 /* 0b0000000000001000 */
> +#define RTE_PTYPE_TUNNEL_GRENAT_MAC 0x0009 /* 0b0000000000001001 */
> +#define RTE_PTYPE_TUNNEL_GRENAT_MACVLAN 0x000a /* 0b0000000000001010 */
> +#define RTE_PTYPE_TUNNEL_MASK 0x000f /* 0b0000000000001111 */
> +/* bit 7:4 for L3 types */
> +#define RTE_PTYPE_L3_IPV4 0x0010 /* 0b0000000000010000 */
> +#define RTE_PTYPE_L3_IPV4_EXT 0x0030 /* 0b0000000000110000 */
> +#define RTE_PTYPE_L3_IPV6 0x0040 /* 0b0000000001000000 */
> +#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x0090 /* 0b0000000010010000 */
> +#define RTE_PTYPE_L3_IPV6_EXT 0x00c0 /* 0b0000000011000000 */
> +#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x00e0 /* 0b0000000011100000 */
> +#define RTE_PTYPE_L3_MASK 0x00f0 /* 0b0000000011110000 */
can we expect that when RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT or
RTE_PTYPE_L3_IPV4_EXT_UNKNOWN is set, the hardware also verified the
L3 checksum?
My understanding is:
- if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 0
-> checksum was checked by hw and is good
- if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 1
-> checksum was checked by hw and is bad
- if packet_type is not IPv4*
-> checksum was not checked by hw
I think it would solve the problem asked by Stephen
http://dpdk.org/ml/archives/dev/2015-January/011550.html
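A minimal sketch of how an application could then consume this convention (assuming
only the RTE_ETH_IS_IPV4_HDR() macro from this patch and the existing
PKT_RX_IP_CKSUM_BAD flag; the helper name is illustrative):

#include <rte_mbuf.h>

/* Returns 1 if hw verified the IPv4 checksum and it is good, 0 if hw says it
 * is bad, -1 if it was not checked by hw and must be checked in software. */
static inline int
rx_ipv4_cksum_state(const struct rte_mbuf *m)
{
	if (!RTE_ETH_IS_IPV4_HDR(m->packet_type))
		return -1;
	if (m->ol_flags & PKT_RX_IP_CKSUM_BAD)
		return 0;
	return 1;
}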
> +/* bit 10:8 for L4 types */
> +#define RTE_PTYPE_L4_TCP 0x0100 /* 0b0000000100000000 */
> +#define RTE_PTYPE_L4_UDP 0x0200 /* 0b0000001000000000 */
> +#define RTE_PTYPE_L4_FRAG 0x0300 /* 0b0000001100000000 */
> +#define RTE_PTYPE_L4_SCTP 0x0400 /* 0b0000010000000000 */
> +#define RTE_PTYPE_L4_ICMP 0x0500 /* 0b0000010100000000 */
> +#define RTE_PTYPE_L4_NONFRAG 0x0600 /* 0b0000011000000000 */
> +#define RTE_PTYPE_L4_MASK 0x0700 /* 0b0000011100000000 */
Same question for L4.
Note: it would mean that if hardware is able to recognize a TCP
packet but not to verify the checksum, it has to set RTE_PTYPE_L4 to
unknown.
> +/* bit 13:11 for inner L3 types */
> +#define RTE_PTYPE_INNER_L3_IPV4 0x0800 /* 0b0000100000000000 */
> +#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x1000 /* 0b0001000000000000 */
> +#define RTE_PTYPE_INNER_L3_IPV6 0x1800 /* 0b0001100000000000 */
> +#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x2000 /* 0b0010000000000000 */
> +#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x2800 /* 0b0010100000000000 */
> +#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x3000 /* 0b0011000000000000 */
> +#define RTE_PTYPE_INNER_L3_MASK 0x3800 /* 0b0011100000000000 */
> +/* bit 15:14 reserved */
> +
> +/**
> + * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
> + * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
> + * determin if it is an IPV4 packet.
> + */
> +#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
> +
> +/**
> + * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
> + * one, bit 6 is selected to be used for IPv4 only. Then checking bit 6 can
> + * determin if it is an IPV4 packet.
> + */
> +#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
> +
> +/* Check if it is a tunneling packet */
> +#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
> +
> /**
> * Get the name of a RX offload flag
> *
>
Thanks,
Olivier
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet types
2015-01-30 13:56 ` Olivier MATZ
@ 2015-02-02 1:43 ` Zhang, Helin
[not found] ` <54CF5CF8.2090605@6wind.com>
0 siblings, 1 reply; 257+ messages in thread
From: Zhang, Helin @ 2015-02-02 1:43 UTC (permalink / raw)
To: Olivier MATZ, dev
Hi Olivier
> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Friday, January 30, 2015 9:56 PM
> To: Zhang, Helin; dev@dpdk.org
> Cc: Stephen Hemminger
> Subject: Re: [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet
> types
>
> Hi Helin,
>
> On 01/29/2015 04:15 AM, Helin Zhang wrote:
> > As there are only 6 bit flags in ol_flags for indicating packet types,
> > which is not enough to describe all the possible packet types hardware
> > can recognize. For example, i40e hardware can recognize more than 150
> > packet types. Unified packet type is composed of tunnel type, L3 type,
> > L4 type and inner L3 type fields, and can be stored in 16 bits mbuf
> > field of 'packet_type'.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
> > ---
> > lib/librte_mbuf/rte_mbuf.h | 74
> > ++++++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 74 insertions(+)
> >
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index 16059c6..94ae344 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -165,6 +165,80 @@ extern "C" {
> > /* Use final bit of flags to indicate a control mbuf */
> > #define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control
> data */
> >
> > +/*
> > + * Sixteen bits are divided into several fields to mark packet types.
> > +Note that
> > + * each field is indexical.
> > + * - Bit 3:0 is for tunnel types.
> > + * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
> > + * - Bit 10:8 is for L4 types. It can also be used for inner L4 types for
> > + * tunneling packets.
> > + * - Bit 13:11 is for inner L3 types.
> > + * - Bit 15:14 is reserved.
>
> Is there a reason why using this specific order?
Yes, to support ixgbe Vector PMD, outer L3 types and L4 types need to be contiguous
and in this order.
>
> Also, there are 4 bits for outer L3 types and 3 bits for inner L3 types, but both of
> them have 6 different supported types. Is it intentional?
Yes, it is to support the ixgbe Vector PMD. Contiguous 7 bits are needed, though 1 bit is wasted.
>
> > + *
> > + * To be compitable with Vector PMD, RTE_PTYPE_L3_IPV4,
> > + RTE_PTYPE_L3_IPV4_EXT,
>
> compitable -> compatible
Good catch! It will be fixed in next version. Thanks!
>
> > + * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP,
> > +RTE_PTYPE_L4_UDP
> > + * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
> > + *
> > + * Note that L3 types values are selected for checking IPV4/IPV6
> > +header from
> > + * performance point of view. Reading annotations of
> > +RTE_ETH_IS_IPV4_HDR and
> > + * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type
> values.
> > + */
> > +#define RTE_PTYPE_UNKNOWN 0x0000 /*
> 0b0000000000000000 */
> > +/* bit 3:0 for tunnel types */
> > +#define RTE_PTYPE_TUNNEL_IP 0x0001 /*
> 0b0000000000000001 */
> > +#define RTE_PTYPE_TUNNEL_TCP 0x0002 /*
> 0b0000000000000010 */
> > +#define RTE_PTYPE_TUNNEL_UDP 0x0003 /*
> 0b0000000000000011 */
> > +#define RTE_PTYPE_TUNNEL_GRE 0x0004 /*
> 0b0000000000000100 */
> > +#define RTE_PTYPE_TUNNEL_VXLAN 0x0005 /*
> 0b0000000000000101 */
> > +#define RTE_PTYPE_TUNNEL_NVGRE 0x0006 /*
> 0b0000000000000110 */
> > +#define RTE_PTYPE_TUNNEL_GENEVE 0x0007 /*
> 0b0000000000000111 */
> > +#define RTE_PTYPE_TUNNEL_GRENAT 0x0008 /*
> 0b0000000000001000 */
> > +#define RTE_PTYPE_TUNNEL_GRENAT_MAC 0x0009 /*
> 0b0000000000001001 */
> > +#define RTE_PTYPE_TUNNEL_GRENAT_MACVLAN 0x000a /*
> 0b0000000000001010 */
> > +#define RTE_PTYPE_TUNNEL_MASK 0x000f /*
> 0b0000000000001111 */
> > +/* bit 7:4 for L3 types */
> > +#define RTE_PTYPE_L3_IPV4 0x0010 /*
> 0b0000000000010000 */
> > +#define RTE_PTYPE_L3_IPV4_EXT 0x0030 /*
> 0b0000000000110000 */
> > +#define RTE_PTYPE_L3_IPV6 0x0040 /*
> 0b0000000001000000 */
> > +#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x0090 /*
> 0b0000000010010000 */
> > +#define RTE_PTYPE_L3_IPV6_EXT 0x00c0 /*
> 0b0000000011000000 */
> > +#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x00e0 /*
> 0b0000000011100000 */
> > +#define RTE_PTYPE_L3_MASK 0x00f0 /*
> 0b0000000011110000 */
>
> can we expect that when RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT or
> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN is set, the hardware also verified the
> L3 checksum?
RTE_PTYPE_L3_IPV4 means NONE-EXT (no IP options). Only one of the above 3 can be set at a time.
These bits don't indicate any checksum; checksum should be indicated by other flags.
They are just for the packet types the hardware can recognize.
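Just for illustration (a sketch, not part of the patch), an application could consume
those three IPv4 values with nothing but the RTE_PTYPE_* definitions above:

#include <rte_mbuf.h>

static void
handle_l3_type(const struct rte_mbuf *m)
{
	/* One test covers all three IPv4 values, as bit 4 is set in each. */
	if (!RTE_ETH_IS_IPV4_HDR(m->packet_type))
		return;

	switch (m->packet_type & RTE_PTYPE_L3_MASK) {
	case RTE_PTYPE_L3_IPV4:             /* IPv4 without options */
	case RTE_PTYPE_L3_IPV4_EXT:         /* IPv4 with options */
	case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN: /* IPv4, options unknown */
		/* type information only; checksum status is in ol_flags */
		break;
	}
}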
>
> My understanding is:
>
> - if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 0
> -> checksum was checked by hw and is good
> - if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 1
> -> checksum was checked by hw and is bad
> - if packet_type is not IPv4*
> -> checksum was not checked by hw
>
> I think it would solve the problem asked by Stephen
> http://dpdk.org/ml/archives/dev/2015-January/011550.html
>
> > +/* bit 10:8 for L4 types */
> > +#define RTE_PTYPE_L4_TCP 0x0100 /*
> 0b0000000100000000 */
> > +#define RTE_PTYPE_L4_UDP 0x0200 /*
> 0b0000001000000000 */
> > +#define RTE_PTYPE_L4_FRAG 0x0300 /*
> 0b0000001100000000 */
> > +#define RTE_PTYPE_L4_SCTP 0x0400 /*
> 0b0000010000000000 */
> > +#define RTE_PTYPE_L4_ICMP 0x0500 /*
> 0b0000010100000000 */
> > +#define RTE_PTYPE_L4_NONFRAG 0x0600 /*
> 0b0000011000000000 */
> > +#define RTE_PTYPE_L4_MASK 0x0700 /*
> 0b0000011100000000 */
>
> Same question for L4.
>
> Note: it would means that if a hardware is able to recognize a TCP packet but
> not to verify the checksum, it has to set RTE_PTYPE_L4 to unknown.
>
> > +/* bit 13:11 for inner L3 types */
> > +#define RTE_PTYPE_INNER_L3_IPV4 0x0800 /*
> 0b0000100000000000 */
> > +#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x1000 /*
> 0b0001000000000000 */
> > +#define RTE_PTYPE_INNER_L3_IPV6 0x1800 /*
> 0b0001100000000000 */
> > +#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x2000 /*
> 0b0010000000000000 */
> > +#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x2800 /*
> > +0b0010100000000000 */ #define
> RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x3000 /* 0b0011000000000000
> */
We cannot define the hardware behavior; the driver just reports the packet
information recognized by the hardware directly to the mbuf.
Based on my experiments on i40e hardware, if an IPv4 packet has a wrong checksum,
by default the PMD driver cannot see the packet at all. So we don't need to care
about it too much!
Thanks for the good comments!
> > +#define RTE_PTYPE_INNER_L3_MASK 0x3800 /*
> 0b0011100000000000 */
> > +/* bit 15:14 reserved */
> > +
> > +/**
> > + * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4
> > +types one by
> > + * one, bit 4 is selected to be used for IPv4 only. Then checking bit
> > +4 can
> > + * determin if it is an IPV4 packet.
> > + */
> > +#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
> > +
> > +/**
> > + * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4
> > +types one by
> > + * one, bit 6 is selected to be used for IPv4 only. Then checking bit
> > +6 can
> > + * determin if it is an IPV4 packet.
> > + */
> > +#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
> > +
> > +/* Check if it is a tunneling packet */ #define
> > +RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
> > +
> > /**
> > * Get the name of a RX offload flag
> > *
> >
>
> Thanks,
> Olivier
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH 17/17] mbuf: remove old packet type bit masks for ol_flags
2015-01-30 13:37 ` Olivier MATZ
@ 2015-02-02 1:53 ` Zhang, Helin
0 siblings, 0 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-02-02 1:53 UTC (permalink / raw)
To: Olivier MATZ, dev
Hi Olivier
> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Friday, January 30, 2015 9:37 PM
> To: Zhang, Helin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 17/17] mbuf: remove old packet type bit masks
> for ol_flags
>
> Hi Helin,
>
> On 01/29/2015 04:16 AM, Helin Zhang wrote:
> > To unify packet types among all PMDs, bit masks and relevant macros of
> > packet type for ol_flags are replaced by unified packet type and
> > relevant macros.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > ---
> > lib/librte_mbuf/rte_mbuf.c | 6 ------ lib/librte_mbuf/rte_mbuf.h |
> > 10 ++--------
> > 2 files changed, 2 insertions(+), 14 deletions(-)
> >
> > diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> > index 1b14e02..8050ccf 100644
> > --- a/lib/librte_mbuf/rte_mbuf.c
> > +++ b/lib/librte_mbuf/rte_mbuf.c
> > @@ -215,14 +215,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t
> mask)
> > /* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW";
> */
> > /* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
> > /* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
> > - case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
> > - case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
> > - case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
> > - case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
> > case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
> > case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
> > - case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
> > - case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
>
> I see you are not removing IEEE1588. Is there a reason why it is not handled as
> a packet_type?
IEEE1588 is not part of the information reported by hardware in the packet type.
Yes, your idea on this is worth taking into account.
>
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index 94ae344..5df0d61 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -90,16 +90,10 @@ extern "C" {
> > #define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer
> overflow. */
> > #define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing
> error. */
> > #define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
> > -#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4
> header. */
> > -#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with
> extended IPv4 header. */
> > -#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6
> header. */
> > -#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with
> > extended IPv6 header. */ #define PKT_RX_IEEE1588_PTP (1ULL << 9)
> > /**< RX IEEE1588 L2 Ethernet PT Packet. */ #define
> > PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped
> > packet.*/ -#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel
> packet with IPv4 header.*/ -#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12)
> /**< RX tunnel packet with IPv6 header. */
> > -#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR
> match. */
> > -#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if
> FDIR match. */
> > +#define PKT_RX_FDIR_ID (1ULL << 11) /**< FD id reported if FDIR
> match. */
> > +#define PKT_RX_FDIR_FLX (1ULL << 12) /**< Flexible bytes reported if
> FDIR match. */
>
> It looks like bit numbers are not contiguous anymore (there is a hole between
> 5 and 8).
Initially I didn't want to move the following values up, as I am not sure whether it
may affect other features.
I'd prefer to keep that hole as reserved. What do you guys think?
Thanks for the good comments!
>
> Regards,
> Olivier
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH 00/17] unified packet type
2015-01-30 13:31 ` [dpdk-dev] [PATCH 00/17] unified packet type Olivier MATZ
@ 2015-02-02 2:44 ` Zhang, Helin
[not found] ` <54CF617B.5010009@6wind.com>
0 siblings, 1 reply; 257+ messages in thread
From: Zhang, Helin @ 2015-02-02 2:44 UTC (permalink / raw)
To: Olivier MATZ, dev
Hi Olivier
> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Friday, January 30, 2015 9:31 PM
> To: Zhang, Helin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 00/17] unified packet type
>
> Hi Helin,
>
> On 01/29/2015 04:15 AM, Helin Zhang wrote:
> > Currently only 6 bits which are stored in ol_flags are used to
> > indicate the packet types. This is not enough, as some NIC hardware
> > can recognize quite a lot of packet types, e.g i40e hardware can
> > recognize more than 150 packet types. Hiding those packet types hides
> > hardware offload capabilities which could be quite useful for improving
> performance and for end users.
> > So an unified packet types are needed to support all possible PMDs.
> > Recently a 16 bits packet_type field has been added in mbuf header
> > which can be used for this purpose. In addition, all packet types
> > stored in ol_flag field should be deleted at all, and 6 bits of ol_flags can be
> save as the benifit.
> >
> > Initially, 16 bits of packet_type can be divided into several sub
> > fields to indicate different packet type information of a packet. The
> > initial design is to divide those bits into 4 fields for L3 types,
> > tunnel types, inner L3 types and L4 types. All PMDs should translate
> > the offloaded packet types into this 4 fields of information, for user
> applications.
>
> You haven't answered to my question I asked in your RFC patch [1].
> I copied it below:
>
>
> >> On 01/20/2015 03:28 AM, Zhang, Helin wrote:
> >>>> Another question I've asked several times[1][2] : what does having
> >>>> RTE_PTYPE_TUNNEL_IP mean? What fields are checked by the hardware
> >>>> (or the driver) and what fields should be checked by the application?
> >>>> Are you sure that all the drivers (ixgbe, i40e, vmxnet3, enic)
> >>>> check the same fields? (ethertype, ip version, ip len correct, ip
> >>>> checksum correct, flags, ...)
> >>> RTE_PTYPE_TUNNEL_IP means hardware recognizes the received packet
> as
> >>> an IP-in-IP packet.
> >>> All the fields are filled by PMD which is recognized by hardware.
> >>> The application can just use it which can save some cpu cycles to
> >>> recognize the packet type by software.
> >>> Drivers is responsible for filling with correct values according to
> >>> the packet types recognized by its hardware. Different PMDs may fill
> >>> with different values based on different capabilities.
> >>
> >> Sorry, that does not answer to my question.
> >>
> >> Let's take a simple example. Imagine a hardware-1 that is able to
> >> recognize an IP packet by checking the ethertype and that the IP
> >> version is set to 4.
> >> Another hardware-2 recognize an IP packet by checking the ethertype,
> >> the IP version and that the IP length is correct compared to m_len(m).
> >>
> >> For the same packet, both hardwares will return RTE_PTYPE_L3_IPV4,
> >> but they don't do the same checks on the packet. As I want my
> >> application behave exactly the same whatever the hardware, I need to
> >> know what checks are done in hardware, so I can decide what checks
> >> must be done in my application.
> >>
> >> Example of definition: RTE_PTYPE_L3_IPV4 means that ethertype is
> >> 0x0800 and IP.version is 4.
> >>
> >> It means that I can skip these 2 tests in my application if I have
> >> this packet_type, but all other checks must be done in software (ip
> >> length, flags, checksum, ...)
> >>
> >> For each packet type, we need a definition like above, and we must
> >> check that all drivers setting a packet type behave like described.
Hmm, I think packet_type may need to be renamed to something else, like
offload_packet_type.
It is just for the packet type information reported by hardware; it is not for all
the information of a packet.
As different hardware may have different capabilities, the same packet may not be
reported the same way in the mbuf on different hardware.
Given your question, I think hardware capability flags may be needed. Applications
could query the packet type capabilities on each port, so they know what kind of
packet type information can be reported by the corresponding hardware.
What do you think? And are there any better ideas from you?
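As a rough sketch of that idea (the function below is purely hypothetical, invented
here only for illustration; no such API exists at this point):

#include <stdint.h>
#include <rte_mbuf.h>

/* Hypothetical query: which RTE_PTYPE_* values can this port report? */
int rte_eth_dev_get_ptype_capa(uint8_t port_id, uint32_t *ptype_mask);

static int
port_can_report_l4(uint8_t port_id)
{
	uint32_t capa = 0;

	if (rte_eth_dev_get_ptype_capa(port_id, &capa) != 0)
		return 0;
	/* Fall back to software parsing if the hw cannot report L4 types. */
	return (capa & RTE_PTYPE_L4_MASK) != 0;
}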
Thank you very much!
Regards,
Helin
>
> I'm not opposed to have a packet_type field in rx mbuf, but I really think the
> question above is an important question to make this feature useful to the
> applications.
>
>
> Regards,
> Olivier
>
> [1] http://dpdk.org/ml/archives/dev/2015-January/011273.html
>
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet types
[not found] ` <54CF5CF8.2090605@6wind.com>
@ 2015-02-03 3:18 ` Zhang, Helin
2015-02-03 6:37 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Zhang, Helin @ 2015-02-03 3:18 UTC (permalink / raw)
To: Olivier MATZ, dev
> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Monday, February 2, 2015 7:18 PM
> To: Zhang, Helin; dev@dpdk.org
> Cc: Stephen Hemminger
> Subject: Re: [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet
> types
>
> Hi Helin,
>
> On 02/02/2015 02:43 AM, Zhang, Helin wrote:
> >>> +/*
> >>> + * Sixteen bits are divided into several fields to mark packet types.
> >>> +Note that
> >>> + * each field is indexical.
> >>> + * - Bit 3:0 is for tunnel types.
> >>> + * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
> >>> + * - Bit 10:8 is for L4 types. It can also be used for inner L4 types for
> >>> + * tunneling packets.
> >>> + * - Bit 13:11 is for inner L3 types.
> >>> + * - Bit 15:14 is reserved.
> >>
> >> Is there a reason why using this specific order?
> > Yes, to support ixgbe Vector PMD, outer L3 types and L4 types need to
> > be contiguous and in this order.
>
> When you say "need to be", do you mean it's impossible to do in another
> manner or just that it would be slower?
It was designed like this; otherwise, a performance drop must be expected.
>
> >> Also, there are 4 bits for outer L3 types and 3 bits for inner L3
> >> types, but both of them have 6 different supported types. Is it intentional?
> > Yes, it is to support ixgbe Vector PMD. Contiguous 7 bits are needed, though
> 1 bit wasted.
>
> To be honnest, I'm always a surprised that in dpdk we prefer having a strange
> API just because it's faster or easier to do on one specific driver (usually i40e or
> ixgbe). Unfortunately, trying to optimize the API for one driver may result in
> making the rest of the code (application and other drivers) slower and more
> complex.
Based on my understanding, 'faster' is what most DPDK customers want; otherwise,
they wouldn't need DPDK. Different hardware inevitably has different capabilities;
I am trying to unify at least the packet types to make things easier.
>
> In your proposition, there is no inner l4_type. I consider it's as useful as the
> other fields. From what I see, there are only 2 bits left. What do you think about
> changing the packet type to 64 bits now?
For tunneling cases, L4_type is for the inner L4 type; an outer L4 type is not needed,
as it can be covered by the tunnel type.
I can expect 64 bits being needed in the future, but for now I don't see any strong
demand for that on currently supported hardware.
In addition, there are no free bits in the first cache line of the mbuf header, so mbuf
changes would be needed to expand it. I'd prefer to do that later to make things easier.
>
> From an API point of view, I think it would be good to have the same structure
> for inner and outer types. For instance (this is just an example):
>
> union layer_pkt_type {
> struct {
> uint16_t l2_type:4;
> uint16_t l3_type:4;
> uint16_t l4_type:4;
> uint16_t tun_type:4;
> };
> uint16_t u16;
> };
>
> struct pkt_type {
> union layer_pkt_type outer;
> union layer_pkt_type inner;
> };
>
> When your application decapsulates tunnels, you can just do outer = inner and
> enter into the same code.
Expanding packet_type is not easy, as there are no free bits in the first cache line.
Is there any tunnel type in an inner packet? Is it a waste?
Is the L2 type really needed? I don't know.
>
>
> >>> + * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP,
> >>> +RTE_PTYPE_L4_UDP
> >>> + * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7
> bits.
> >>> + *
> >>> + * Note that L3 types values are selected for checking IPV4/IPV6
> >>> +header from
> >>> + * performance point of view. Reading annotations of
> >>> +RTE_ETH_IS_IPV4_HDR and
> >>> + * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type
> >> values.
> >>> + */
> >>> +#define RTE_PTYPE_UNKNOWN 0x0000 /*
> >> 0b0000000000000000 */
> >>> +/* bit 3:0 for tunnel types */
> >>> +#define RTE_PTYPE_TUNNEL_IP 0x0001 /*
> >> 0b0000000000000001 */
> >>> +#define RTE_PTYPE_TUNNEL_TCP 0x0002 /*
> >> 0b0000000000000010 */
> >>> +#define RTE_PTYPE_TUNNEL_UDP 0x0003 /*
> >> 0b0000000000000011 */
> >>> +#define RTE_PTYPE_TUNNEL_GRE 0x0004 /*
> >> 0b0000000000000100 */
> >>> +#define RTE_PTYPE_TUNNEL_VXLAN 0x0005 /*
> >> 0b0000000000000101 */
> >>> +#define RTE_PTYPE_TUNNEL_NVGRE 0x0006 /*
> >> 0b0000000000000110 */
> >>> +#define RTE_PTYPE_TUNNEL_GENEVE 0x0007 /*
> >> 0b0000000000000111 */
> >>> +#define RTE_PTYPE_TUNNEL_GRENAT 0x0008 /*
> >> 0b0000000000001000 */
> >>> +#define RTE_PTYPE_TUNNEL_GRENAT_MAC 0x0009 /*
> >> 0b0000000000001001 */
> >>> +#define RTE_PTYPE_TUNNEL_GRENAT_MACVLAN 0x000a /*
> >> 0b0000000000001010 */
> >>> +#define RTE_PTYPE_TUNNEL_MASK 0x000f /*
> >> 0b0000000000001111 */
> >>> +/* bit 7:4 for L3 types */
> >>> +#define RTE_PTYPE_L3_IPV4 0x0010 /*
> >> 0b0000000000010000 */
> >>> +#define RTE_PTYPE_L3_IPV4_EXT 0x0030 /*
> >> 0b0000000000110000 */
> >>> +#define RTE_PTYPE_L3_IPV6 0x0040 /*
> >> 0b0000000001000000 */
> >>> +#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x0090 /*
> >> 0b0000000010010000 */
> >>> +#define RTE_PTYPE_L3_IPV6_EXT 0x00c0 /*
> >> 0b0000000011000000 */
> >>> +#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x00e0 /*
> >> 0b0000000011100000 */
> >>> +#define RTE_PTYPE_L3_MASK 0x00f0 /*
> >> 0b0000000011110000 */
> >>
> >> can we expect that when RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT or
> >> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN is set, the hardware also verified the
> >> L3 checksum?
> > RTE_PTYPE_L3_IPV4 means there is NONE-EXT. Each time only one of above 3
> can be set.
> > These bits don't indicate any checksum, checksum should be indicated by
> other flags.
> > They are just for packet types hardware can recognized.
>
> I think these 2 information are linked:
>
> - if the hardware cannot recognize packet, it cannot calculate the
> checksum because it does not know the packet type
> - if the hardware can recognize the packet, it can verify that the
> checksum is good or wrong
We cannot know how the hardware works; we care about what the hardware can report.
>
> Today, we have:
>
> - PKT_RX_IPV4_HDR and PKT_RX_IPV4_HDR_EXT to tell if the packet is
> seen as IPv4 by the hw.
>
> - We can suppose that:
>
> - PKT_RX_IPV4_HDR(_EXT)=0 -> no hw checksum information
> - PKT_RX_IPV4_HDR(_EXT)=1 and PKT_RX_IP_CKSUM_BAD=0 -> checksum
> is correct
> - PKT_RX_IPV4_HDR(_EXT)=1 and PKT_RX_IP_CKSUM_BAD=1 -> checksum
> is not correct
>
> - We cannot do the same with L4 because we have no L4 type info,
> but it would be good to be able to do the same.
>
> With your patch, you are removing the PKT_RX_IPV4_HDR and
> PKT_RX_IPV4_HDR_EXT flags, but I think the above assumption about
> checksum should be kept. As you are adding a L4 type info, the same method
> could be applied to L4 checksums.
>
> I think this would definitely solve the problem described by Stephen.
I think packet type and checksum are different things. They are reported by different fields.
PKT_RX_IPV4_HDR and PKT_RX_IPV4_HDR_EXT mean packet type only,
nothing about checksum. Checksum GOOD/BAD can be reported by other flags in ol_flags.
>
>
> >> My understanding is:
> >>
> >> - if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 0
> >> -> checksum was checked by hw and is good
> >> - if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 1
> >> -> checksum was checked by hw and is bad
> >> - if packet_type is not IPv4*
> >> -> checksum was not checked by hw
> >>
> >> I think it would solve the problem asked by Stephen
> >> http://dpdk.org/ml/archives/dev/2015-January/011550.html
> >>
> >>> +/* bit 10:8 for L4 types */
> >>> +#define RTE_PTYPE_L4_TCP 0x0100 /*
> >> 0b0000000100000000 */
> >>> +#define RTE_PTYPE_L4_UDP 0x0200 /*
> >> 0b0000001000000000 */
> >>> +#define RTE_PTYPE_L4_FRAG 0x0300 /*
> >> 0b0000001100000000 */
> >>> +#define RTE_PTYPE_L4_SCTP 0x0400 /*
> >> 0b0000010000000000 */
> >>> +#define RTE_PTYPE_L4_ICMP 0x0500 /*
> >> 0b0000010100000000 */
> >>> +#define RTE_PTYPE_L4_NONFRAG 0x0600 /*
> >> 0b0000011000000000 */
> >>> +#define RTE_PTYPE_L4_MASK 0x0700 /*
> >> 0b0000011100000000 */
> >>
> >> Same question for L4.
> >>
> >> Note: it would means that if a hardware is able to recognize a TCP
> >> packet but not to verify the checksum, it has to set RTE_PTYPE_L4 to
> unknown.
> >>
> >>> +/* bit 13:11 for inner L3 types */
> >>> +#define RTE_PTYPE_INNER_L3_IPV4 0x0800 /*
> >> 0b0000100000000000 */
> >>> +#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x1000 /*
> >> 0b0001000000000000 */
> >>> +#define RTE_PTYPE_INNER_L3_IPV6 0x1800 /*
> >> 0b0001100000000000 */
> >>> +#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x2000 /*
> >> 0b0010000000000000 */
> >>> +#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x2800 /*
> >>> +0b0010100000000000 */ #define
> >> RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x3000 /*
> 0b0011000000000000 */
> > We cannot define the hardware behaviors, it just reports the hardware
> > recognized packet information directly to the mbuf.
> > Based on my experiment on i40e hardware, if a IPV4 packet with wrong
> > checksum, by default, the PMD driver cannot see the packet at all. So
> > we don't need to care about it too much!
>
> I agree that the hardware reports some info that can be different depending on
> the hw. But the role of the driver is to convert these info into a common API
> with a well-defined behavior.
Yes, the driver should report the received packet information with a well-defined
behavior, but not necessarily the same behavior on every device, even for the same packet.
Capabilities can be queried for each port, and then the application knows the port
capability well: what the hardware can report, and what the hardware
cannot report.
The driver should expose the hardware's advanced capabilities as much as possible.
>
> Regards,
> Olivier
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH 00/17] unified packet type
[not found] ` <2601191342CEEE43887BDE71AB977258213E28EC@irsmsx105.ger.corp.intel.com>
@ 2015-02-03 3:25 ` Zhang, Helin
2015-02-03 8:55 ` Olivier MATZ
1 sibling, 0 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-02-03 3:25 UTC (permalink / raw)
To: Ananyev, Konstantin, Olivier MATZ, dev
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Tuesday, February 3, 2015 1:20 AM
> To: Olivier MATZ; Zhang, Helin; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH 00/17] unified packet type
>
> Hi Olivier,
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
> > Sent: Monday, February 02, 2015 11:38 AM
> > To: Zhang, Helin; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 00/17] unified packet type
> >
> > Hi Helin,
> >
> > On 02/02/2015 03:44 AM, Zhang, Helin wrote:
> > >>>> Let's take a simple example. Imagine a hardware-1 that is able to
> > >>>> recognize an IP packet by checking the ethertype and that the IP
> > >>>> version is set to 4.
> > >>>> Another hardware-2 recognize an IP packet by checking the
> > >>>> ethertype, the IP version and that the IP length is correct compared to
> m_len(m).
> > >>>>
> > >>>> For the same packet, both hardwares will return
> > >>>> RTE_PTYPE_L3_IPV4, but they don't do the same checks on the
> > >>>> packet. As I want my application behave exactly the same whatever
> > >>>> the hardware, I need to know what checks are done in hardware, so
> > >>>> I can decide what checks must be done in my application.
> > >>>>
> > >>>> Example of definition: RTE_PTYPE_L3_IPV4 means that ethertype is
> > >>>> 0x0800 and IP.version is 4.
> > >>>>
> > >>>> It means that I can skip these 2 tests in my application if I
> > >>>> have this packet_type, but all other checks must be done in
> > >>>> software (ip length, flags, checksum, ...)
> > >>>>
> > >>>> For each packet type, we need a definition like above, and we
> > >>>> must check that all drivers setting a packet type behave like described.
> > > Hmm, I think the packet_type may need to be renamed to else, like
> offload_packet_type.
> > > It is just for hardware reported packet type information. It is not
> > > for all information of a packet.
> > > As different hardware may have different capability, it cannot
> > > report the same in mbuf among different hardware for the same packet.
> > > With your question, I think the hardware capability flags may be
> > > needed. Applications can query the packet type capabilities on each
> > > port, then it knows what type of packet type information can be reported by
> the corresponding hardware.
> > > What do you think? And are they any better ideas from you?
> >
> > I'm not sure renaming the field would change something here.
> >
> > The high-level question is: how a software can take advantage of this
> > information given by the hardware? If the same packet_type does not
> > have the same meaning depending on the hardware, it's not worth having
> > this info.
> >
> > I think the API should describe for each packet type what can be
> > expected by the application. Here is an example. When a driver sets
> > the
> > RTE_PTYPE_L3_IPV4 type, it means that:
> >
> > - the layer 3 is identified as IP by underlying layer (ex: ethertype=IP
> > if layer 2 is ethernet)
> > - the IP version field is 4
> > - there is no IP options (i.e the size of header is 20)
>
> Yes, I suppose that's what supported HW can guarantee when
> RTE_PTYPE_L3_IPV4 is set.
>
> > - the checksum field has been verified by hw, and if wrong, the
> > flag PKT_RX_IP_CKSUM_BAD is set
>
> Hmm, why is that?
> As I remember on many devices it is configurable by SW should HW do RX
> checksum offload or not.
> From DPDK point of view there is hw_ip_checksum field in rte_eth_rxmode.
> So it is a possible situation, when at RX HW does packet type determination,
> but doesn't make L3/L4 checksum calculation.
>
> I suppose for checksum(s) it should be a separate flags (in ol_flags) with 3
> possible values:
> CKSUM_UNKNOWN, CKSUM_BAD, CKSUM_OK.
>
> Konstantin
I think packet type and checksum are totally different things in DPDK, though
they might have dependencies in hardware.
Checksum good/bad is still indicated in ol_flags. Packet type says nothing about
checksum.
Regards,
Helin
>
> >
> > If the hardware is not able to give all this information, there are
> > 2 solutions:
> > - do the remaining tests in the driver
> > - or set l3 pkt_type to unknown
> >
> > All other conditions that are not described in the API should be
> > checked by the applition if it needs the information (ex: check that
> > IP dest address is legal, that ip->len is >= 20, ...).
> >
> >
> > If we are able to describe this for all packet types, it would really
> > help application to take advantage of these packet types.
> >
> > Regards,
> > Olivier
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet types
2015-02-03 3:18 ` Zhang, Helin
@ 2015-02-03 6:37 ` Zhang, Helin
2015-02-03 9:12 ` Olivier MATZ
0 siblings, 1 reply; 257+ messages in thread
From: Zhang, Helin @ 2015-02-03 6:37 UTC (permalink / raw)
To: 'Olivier MATZ', 'dev@dpdk.org'
> -----Original Message-----
> From: Zhang, Helin
> Sent: Tuesday, February 3, 2015 11:19 AM
> To: Olivier MATZ; dev@dpdk.org
> Cc: Stephen Hemminger
> Subject: RE: [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet
> types
>
>
>
> > -----Original Message-----
> > From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> > Sent: Monday, February 2, 2015 7:18 PM
> > To: Zhang, Helin; dev@dpdk.org
> > Cc: Stephen Hemminger
> > Subject: Re: [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified
> > packet types
> >
> > Hi Helin,
> >
> > On 02/02/2015 02:43 AM, Zhang, Helin wrote:
> > >>> +/*
> > >>> + * Sixteen bits are divided into several fields to mark packet types.
> > >>> +Note that
> > >>> + * each field is indexical.
> > >>> + * - Bit 3:0 is for tunnel types.
> > >>> + * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
> > >>> + * - Bit 10:8 is for L4 types. It can also be used for inner L4 types for
> > >>> + * tunneling packets.
> > >>> + * - Bit 13:11 is for inner L3 types.
> > >>> + * - Bit 15:14 is reserved.
> > >>
> > >> Is there a reason why using this specific order?
> > > Yes, to support ixgbe Vector PMD, outer L3 types and L4 types need
> > > to be contiguous and in this order.
> >
> > When you say "need to be", do you mean it's impossible to do in
> > another manner or just that it would be slower?
> It was designed to be like this, otherwise, performance drop must be expected.
>
> >
> > >> Also, there are 4 bits for outer L3 types and 3 bits for inner L3
> > >> types, but both of them have 6 different supported types. Is it intentional?
> > > Yes, it is to support ixgbe Vector PMD. Contiguous 7 bits are
> > > needed, though
> > 1 bit wasted.
> >
> > To be honnest, I'm always a surprised that in dpdk we prefer having a
> > strange API just because it's faster or easier to do on one specific
> > driver (usually i40e or ixgbe). Unfortunately, trying to optimize the
> > API for one driver may result in making the rest of the code
> > (application and other drivers) slower and more complex.
> Based on my understanding, 'faster' is most of DPDK customers wanted.
> Otherwise, they don't need DPDK. Different hardware must have different
> capabilities, I am trying to unify at least packet types to get things easier.
>
> >
> > In your proposition, there is no inner l4_type. I consider it's as
> > useful as the other fields. From what I see, there are only 2 bits
> > left. What do you think about changing the packet type to 64 bits now?
> For tunneling cases, L4_type is for inner L4 type, outer L4 type is not needed, as
> it can be in tunnel type.
> I can expect 64 bits are needed in the future. But for now, I don't see any
> strong demand on that for currently supported hardware.
> In addition, there is no free bit in the first cache line of mbuf header, mbuf
> changes are needed to expand it. I'd prefer to do it later to make things easier.
Sorry, I misremembered the usage of the first cache line of the mbuf. It still has some
free space. Based on this, enlarging the packet type (to 32 or 64 bits) might be good.
>
> >
> > From an API point of view, I think it would be good to have the same
> > structure for inner and outer types. For instance (this is just an example):
> >
> > union layer_pkt_type {
> > struct {
> > uint16_t l2_type:4;
> > uint16_t l3_type:4;
> > uint16_t l4_type:4;
> > uint16_t tun_type:4;
> > };
> > uint16_t u16;
> > };
> >
> > struct pkt_type {
> > union layer_pkt_type outer;
> > union layer_pkt_type inner;
> > };
> >
> > When your application decapsulates tunnels, you can just do outer =
> > inner and enter into the same code.
> Expanding packet_type is not easy, as there is no free bits in the first cache
> line.
> Is there any tunnel type in inner packet? Is it a waste?
> Is L2 type really needed? I don't know.
If the mbuf is not short of space now, a definition like yours might be good.
But tun_type is not required for the inner packet; I'd prefer to define only what is
needed, while taking the Vector PMD support into account. It seems 32 bits might be
enough, like below:
struct pkt_type {
uint32_t l2_type:4;
uint32_t l3_type:4;
uint32_t l4_type:4;
uint32_t tun_type:4;
uint32_t inner_l2_type:4;
uint32_t inner_l3_type:4;
uint32_t inner_l4_type:4;
};
Regards,
Helin
>
> >
> >
> > >>> + * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP,
> > >>> +RTE_PTYPE_L4_UDP
> > >>> + * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous
> > >>> +7
> > bits.
> > >>> + *
> > >>> + * Note that L3 types values are selected for checking IPV4/IPV6
> > >>> +header from
> > >>> + * performance point of view. Reading annotations of
> > >>> +RTE_ETH_IS_IPV4_HDR and
> > >>> + * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3
> > >>> +type
> > >> values.
> > >>> + */
> > >>> +#define RTE_PTYPE_UNKNOWN 0x0000 /*
> > >> 0b0000000000000000 */
> > >>> +/* bit 3:0 for tunnel types */
> > >>> +#define RTE_PTYPE_TUNNEL_IP 0x0001 /*
> > >> 0b0000000000000001 */
> > >>> +#define RTE_PTYPE_TUNNEL_TCP 0x0002 /*
> > >> 0b0000000000000010 */
> > >>> +#define RTE_PTYPE_TUNNEL_UDP 0x0003 /*
> > >> 0b0000000000000011 */
> > >>> +#define RTE_PTYPE_TUNNEL_GRE 0x0004 /*
> > >> 0b0000000000000100 */
> > >>> +#define RTE_PTYPE_TUNNEL_VXLAN 0x0005 /*
> > >> 0b0000000000000101 */
> > >>> +#define RTE_PTYPE_TUNNEL_NVGRE 0x0006 /*
> > >> 0b0000000000000110 */
> > >>> +#define RTE_PTYPE_TUNNEL_GENEVE 0x0007 /*
> > >> 0b0000000000000111 */
> > >>> +#define RTE_PTYPE_TUNNEL_GRENAT 0x0008 /*
> > >> 0b0000000000001000 */
> > >>> +#define RTE_PTYPE_TUNNEL_GRENAT_MAC 0x0009 /*
> > >> 0b0000000000001001 */
> > >>> +#define RTE_PTYPE_TUNNEL_GRENAT_MACVLAN 0x000a /*
> > >> 0b0000000000001010 */
> > >>> +#define RTE_PTYPE_TUNNEL_MASK 0x000f /*
> > >> 0b0000000000001111 */
> > >>> +/* bit 7:4 for L3 types */
> > >>> +#define RTE_PTYPE_L3_IPV4 0x0010 /*
> > >> 0b0000000000010000 */
> > >>> +#define RTE_PTYPE_L3_IPV4_EXT 0x0030 /*
> > >> 0b0000000000110000 */
> > >>> +#define RTE_PTYPE_L3_IPV6 0x0040 /*
> > >> 0b0000000001000000 */
> > >>> +#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x0090 /*
> > >> 0b0000000010010000 */
> > >>> +#define RTE_PTYPE_L3_IPV6_EXT 0x00c0 /*
> > >> 0b0000000011000000 */
> > >>> +#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x00e0 /*
> > >> 0b0000000011100000 */
> > >>> +#define RTE_PTYPE_L3_MASK 0x00f0 /*
> > >> 0b0000000011110000 */
> > >>
> > >> can we expect that when RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT
> or
> > >> RTE_PTYPE_L3_IPV4_EXT_UNKNOWN is set, the hardware also verified
> > >> the
> > >> L3 checksum?
> > > RTE_PTYPE_L3_IPV4 means there is NONE-EXT. Each time only one of
> > > above 3
> > can be set.
> > > These bits don't indicate any checksum, checksum should be indicated
> > > by
> > other flags.
> > > They are just for packet types hardware can recognized.
> >
> > I think these 2 information are linked:
> >
> > - if the hardware cannot recognize packet, it cannot calculate the
> > checksum because it does not know the packet type
> > - if the hardware can recognize the packet, it can verify that the
> > checksum is good or wrong
> We cannot know how hardware works, we care about what hardware can
> report.
>
> >
> > Today, we have:
> >
> > - PKT_RX_IPV4_HDR and PKT_RX_IPV4_HDR_EXT to tell if the packet is
> > seen as IPv4 by the hw.
> >
> > - We can suppose that:
> >
> > - PKT_RX_IPV4_HDR(_EXT)=0 -> no hw checksum information
> > - PKT_RX_IPV4_HDR(_EXT)=1 and PKT_RX_IP_CKSUM_BAD=0 ->
> checksum
> > is correct
> > - PKT_RX_IPV4_HDR(_EXT)=1 and PKT_RX_IP_CKSUM_BAD=1 ->
> checksum
> > is not correct
> >
> > - We cannot do the same with L4 because we have no L4 type info,
> > but it would be good to be able to do the same.
> >
> > With your patch, you are removing the PKT_RX_IPV4_HDR and
> > PKT_RX_IPV4_HDR_EXT flags, but I think the above assumption about
> > checksum should be kept. As you are adding a L4 type info, the same
> > method could be applied to L4 checksums.
> >
> > I think this would definitely solve the problem described by Stephen.
> I think packet type and checksum are different things. They are reported by
> different fields.
> PKT_RX_IPV4_HDR and PKT_RX_IPV4_HDR_EXT mean packet type only,
> nothing about checksum. Checksum GOOD/BAD can be reported by other flags
> in ol_flags.
>
> >
> >
> > >> My understanding is:
> > >>
> > >> - if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 0
> > >> -> checksum was checked by hw and is good
> > >> - if packet_type is IPv4* and PKT_RX_IP_CKSUM_BAD is 1
> > >> -> checksum was checked by hw and is bad
> > >> - if packet_type is not IPv4*
> > >> -> checksum was not checked by hw
> > >>
> > >> I think it would solve the problem asked by Stephen
> > >> http://dpdk.org/ml/archives/dev/2015-January/011550.html
> > >>
> > >>> +/* bit 10:8 for L4 types */
> > >>> +#define RTE_PTYPE_L4_TCP 0x0100 /*
> > >> 0b0000000100000000 */
> > >>> +#define RTE_PTYPE_L4_UDP 0x0200 /*
> > >> 0b0000001000000000 */
> > >>> +#define RTE_PTYPE_L4_FRAG 0x0300 /*
> > >> 0b0000001100000000 */
> > >>> +#define RTE_PTYPE_L4_SCTP 0x0400 /*
> > >> 0b0000010000000000 */
> > >>> +#define RTE_PTYPE_L4_ICMP 0x0500 /*
> > >> 0b0000010100000000 */
> > >>> +#define RTE_PTYPE_L4_NONFRAG 0x0600 /*
> > >> 0b0000011000000000 */
> > >>> +#define RTE_PTYPE_L4_MASK 0x0700 /*
> > >> 0b0000011100000000 */
> > >>
> > >> Same question for L4.
> > >>
> > >> Note: it would means that if a hardware is able to recognize a TCP
> > >> packet but not to verify the checksum, it has to set RTE_PTYPE_L4
> > >> to
> > unknown.
> > >>
> > >>> +/* bit 13:11 for inner L3 types */
> > >>> +#define RTE_PTYPE_INNER_L3_IPV4 0x0800 /*
> > >> 0b0000100000000000 */
> > >>> +#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x1000 /*
> > >> 0b0001000000000000 */
> > >>> +#define RTE_PTYPE_INNER_L3_IPV6 0x1800 /*
> > >> 0b0001100000000000 */
> > >>> +#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x2000 /*
> > >> 0b0010000000000000 */
> > >>> +#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x2800 /*
> > >>> +0b0010100000000000 */ #define
> > >> RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x3000 /*
> > 0b0011000000000000 */
> > > We cannot define the hardware behaviors, it just reports the
> > > hardware recognized packet information directly to the mbuf.
> > > Based on my experiment on i40e hardware, if a IPV4 packet with wrong
> > > checksum, by default, the PMD driver cannot see the packet at all.
> > > So we don't need to care about it too much!
> >
> > I agree that the hardware reports some info that can be different
> > depending on the hw. But the role of the driver is to convert these
> > info into a common API with a well-defined behavior.
> Yes, driver should report the received packet information to a well-defined
> behavior, but not the same behavior, even for the same packet.
> Capability can be queried for each port, and then the application can know the
> port capability well, and know what the hardware can report, and what the
> hardware cannot report.
> Driver should enable the hardware with its advanced capabilities as most as
> possible.
>
> >
> > Regards,
> > Olivier
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH 00/17] unified packet type
[not found] ` <2601191342CEEE43887BDE71AB977258213E28EC@irsmsx105.ger.corp.intel.com>
2015-02-03 3:25 ` Zhang, Helin
@ 2015-02-03 8:55 ` Olivier MATZ
1 sibling, 0 replies; 257+ messages in thread
From: Olivier MATZ @ 2015-02-03 8:55 UTC (permalink / raw)
To: Ananyev, Konstantin, Zhang, Helin, dev
Hi Konstantin,
On 02/02/2015 06:20 PM, Ananyev, Konstantin wrote:
>> I think the API should describe for each packet type what can be
>> expected by the application. Here is an example. When a driver sets the
>> RTE_PTYPE_L3_IPV4 type, it means that:
>>
>> - the layer 3 is identified as IP by underlying layer (ex: ethertype=IP
>> if layer 2 is ethernet)
>> - the IP version field is 4
>> - there is no IP options (i.e the size of header is 20)
>
> Yes, I suppose that's what supported HW can guarantee when RTE_PTYPE_L3_IPV4 is set.
>
>> - the checksum field has been verified by hw, and if wrong, the
>> flag PKT_RX_IP_CKSUM_BAD is set
>
> Hmm, why is that?
> As I remember on many devices it is configurable by SW should HW do RX checksum offload or not.
> From DPDK point of view there is hw_ip_checksum field in rte_eth_rxmode.
> So it is a possible situation, when at RX HW does packet type determination, but doesn't make L3/L4
> checksum calculation.
>
> I suppose for checksum(s) it should be a separate flags (in ol_flags) with 3 possible values:
> CKSUM_UNKNOWN, CKSUM_BAD, CKSUM_OK.
Indeed you are right, it's probably better to have specific flags
for checksum.
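As a sketch of how those three states could be derived (the enum and helper are
illustrative only; the application is assumed to know whether it enabled
hw_ip_checksum in rte_eth_rxmode):

#include <rte_mbuf.h>

enum ip_cksum_state { CKSUM_UNKNOWN, CKSUM_BAD, CKSUM_OK };

static inline enum ip_cksum_state
get_ip_cksum_state(const struct rte_mbuf *m, int hw_ip_checksum_enabled)
{
	/* If RX checksum offload is off, or the hw did not classify the
	 * header as IPv4, the checksum has not been verified by hw. */
	if (!hw_ip_checksum_enabled || !RTE_ETH_IS_IPV4_HDR(m->packet_type))
		return CKSUM_UNKNOWN;
	return (m->ol_flags & PKT_RX_IP_CKSUM_BAD) ? CKSUM_BAD : CKSUM_OK;
}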
Regards,
Olivier
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet types
2015-02-03 6:37 ` Zhang, Helin
@ 2015-02-03 9:12 ` Olivier MATZ
0 siblings, 0 replies; 257+ messages in thread
From: Olivier MATZ @ 2015-02-03 9:12 UTC (permalink / raw)
To: Zhang, Helin, 'dev@dpdk.org'
Hi Helin,
On 02/03/2015 07:37 AM, Zhang, Helin wrote:
>>> When your application decapsulates tunnels, you can just do outer =
>>> inner and enter into the same code.
>> Expanding packet_type is not easy, as there is no free bits in the first cache
>> line.
>> Is there any tunnel type in inner packet? Is it a waste?
>> Is L2 type really needed? I don't know.
> If it is now not short of space in mbuf, the definition as yours might be good.
> But tun_type is not required for inner packet, I'd prefer to define it as needed
> with taking into account the Vector PMD support. It seems 32 bits might be enough,
> like below,
> struct pkt_type {
> uint32_t l2_type:4;
> uint32_t l3_type:4;
> uint32_t l4_type:4;
> uint32_t tun_type:4;
> uint32_t inner_l2_type:4;
> uint32_t inner_l3_type:4;
> uint32_t inner_l4_type:4;
> }
Yes, I think a structure like this would be much better!
Maybe a union with a u32 could also help to assign the value
in one operation.
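A minimal sketch of that suggestion, reusing the field layout proposed above (type
and helper names are illustrative only):

#include <stdint.h>

union pkt_type {
	uint32_t u32;                  /* read/write the whole field at once */
	struct {
		uint32_t l2_type:4;
		uint32_t l3_type:4;
		uint32_t l4_type:4;
		uint32_t tun_type:4;
		uint32_t inner_l2_type:4;
		uint32_t inner_l3_type:4;
		uint32_t inner_l4_type:4;
		uint32_t reserved:4;
	};
};

static inline void
pkt_type_clear(union pkt_type *pt)
{
	pt->u32 = 0;	/* one store instead of seven bit-field writes */
}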
Thanks,
Olivier
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v2 00/15] unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (17 preceding siblings ...)
2015-01-30 13:31 ` [dpdk-dev] [PATCH 00/17] unified packet type Olivier MATZ
@ 2015-02-09 6:40 ` Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 01/15] mbuf: add definitions of unified packet types Helin Zhang
` (14 more replies)
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
19 siblings, 15 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
Currently only 6 bits stored in ol_flags are used to indicate the
packet types. This is not enough, as some NIC hardware can recognize quite
a lot of packet types, e.g. i40e hardware can recognize more than 150 packet
types. Hiding those packet types hides hardware offload capabilities which
could be quite useful for improving performance and for end users. So
unified packet types are needed to support all possible PMDs. The 16-bit
packet_type field in the mbuf structure can be changed to 32 bits and used for
this purpose. In addition, all packet types stored in the ol_flags field should
be removed entirely, and 6 bits of ol_flags can be saved as a benefit.
Initially, 32 bits of packet_type can be divided into several sub fields to
indicate different packet type information of a packet. The initial design
is to divide those bits into fields for L2 types, L3 types, L4 types, tunnel
types, inner L2 types, inner L3 types and inner L4 types. All PMDs should
translate the offloaded packet types into these 7 fields of information,
for user applications.
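For example, an application check using these fields might look roughly like the
sketch below (assuming the RTE_PTYPE_* values and mbuf changes from patch 01/15;
the helper is illustrative only):

#include <rte_mbuf.h>

static int
is_vxlan_over_ipv4(const struct rte_mbuf *m)
{
	/* Tunnel type and outer L3 type now live in one 32-bit packet_type. */
	if ((m->packet_type & RTE_PTYPE_TUNNEL_MASK) != RTE_PTYPE_TUNNEL_VXLAN)
		return 0;
	return RTE_ETH_IS_IPV4_HDR(m->packet_type) != 0;
}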
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
* Used redefined packet types and enlarged packet_type field for all PMDs
and corresponding applications.
* Removed changes in bond and its relevant application, as they are no longer
needed after the recent bond changes.
Helin Zhang (15):
mbuf: add definitions of unified packet types
e1000: support of unified packet type
ixgbe: support of unified packet type
ixgbe: support of unified packet type for vector
i40e: support of unified packet type
enic: support of unified packet type
vmxnet3: support of unified packet type
app/test-pipeline: support of unified packet type
app/test: support of unified packet type
examples/ip_fragmentation: support of unified packet type
examples/ip_reassembly: support of unified packet type
examples/l3fwd-acl: support of unified packet type
examples/l3fwd-power: support of unified packet type
examples/l3fwd: support of unified packet type
mbuf: remove old packet type bit masks
app/test-pipeline/pipeline_hash.c | 7 +-
app/test-pmd/csumonly.c | 6 +-
app/test-pmd/rxonly.c | 9 +-
examples/ip_fragmentation/main.c | 7 +-
examples/ip_reassembly/main.c | 7 +-
examples/l3fwd-acl/main.c | 19 +-
examples/l3fwd-power/main.c | 5 +-
examples/l3fwd/main.c | 64 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
lib/librte_mbuf/rte_mbuf.c | 6 -
lib/librte_mbuf/rte_mbuf.h | 127 +++-
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++-
lib/librte_pmd_enic/enic_main.c | 14 +-
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++-------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++-
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +-
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 +-
17 files changed, 914 insertions(+), 444 deletions(-)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v2 01/15] mbuf: add definitions of unified packet types
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
@ 2015-02-09 6:40 ` Helin Zhang
2015-02-09 10:27 ` Bruce Richardson
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 02/15] e1000: support of unified packet type Helin Zhang
` (13 subsequent siblings)
14 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
As there are only 6 bit flags in ol_flags for indicating packet types,
which is not enough to describe all the possible packet types hardware
can recognize. For example, i40e hardware can recognize more than 150
packet types. Unified packet type is composed of tunnel type, L3 type,
L4 type and inner L3 type fields, and can be stored in mbuf field of
'packet_type' which is modified from 16 bits to 32 bits in mbuf structure.
Accordingly, the structure of 'rte_kni_mbuf' needs to be modified as well.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
lib/librte_mbuf/rte_mbuf.h | 113 +++++++++++++++++++--
2 files changed, 108 insertions(+), 9 deletions(-)
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index 1e55c2d..bd1cc09 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -117,9 +117,9 @@ struct rte_kni_mbuf {
uint16_t data_off; /**< Start address of data in segment buffer. */
char pad1[4];
uint64_t ol_flags; /**< Offload features. */
- char pad2[2];
- uint16_t data_len; /**< Amount of data in segment buffer. */
+ char pad2[4];
uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
/* fields on second cache line */
char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 16059c6..ee912d6 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -165,6 +165,96 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
+/*
+ * 32 bits are divided into several fields to mark packet types. Note that
+ * each field is indexical.
+ * - Bit 3:0 is for L2 types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
+ * - Bit 15:12 is for tunnel types.
+ * - Bit 19:16 is for inner L2 types.
+ * - Bit 23:20 is for inner L3 types.
+ * - Bit 27:24 is for inner L4 types.
+ * - Bit 31:28 is reserved.
+ *
+ * To be compatible with Vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that L3 types values are selected for checking IPV4/IPV6 header from
+ * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
+ */
+#define RTE_PTYPE_UNKNOWN 0x00000000
+/* bit 3:0 for L2 types */
+#define RTE_PTYPE_L2_MAC 0x00000001
+#define RTE_PTYPE_L2_MAC_TIMESYNC 0x00000002
+#define RTE_PTYPE_L2_ARP 0x00000003
+#define RTE_PTYPE_L2_LLDP 0x00000004
+#define RTE_PTYPE_L2_MASK 0x0000000f
+/* bit 7:4 for L3 types */
+#define RTE_PTYPE_L3_IPV4 0x00000010
+#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
+#define RTE_PTYPE_L3_IPV6 0x00000040
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
+#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
+#define RTE_PTYPE_L3_MASK 0x000000f0
+/* bit 11:8 for L4 types */
+#define RTE_PTYPE_L4_TCP 0x00000100
+#define RTE_PTYPE_L4_UDP 0x00000200
+#define RTE_PTYPE_L4_FRAG 0x00000300
+#define RTE_PTYPE_L4_SCTP 0x00000400
+#define RTE_PTYPE_L4_ICMP 0x00000500
+#define RTE_PTYPE_L4_NONFRAG 0x00000600
+#define RTE_PTYPE_L4_MASK 0x00000f00
+/* bit 15:12 for tunnel types */
+#define RTE_PTYPE_TUNNEL_IP 0x00001000
+#define RTE_PTYPE_TUNNEL_GRE 0x00002000
+#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
+#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
+#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
+#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
+#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
+/* bit 19:16 for inner L2 types */
+#define RTE_PTYPE_INNER_L2_MAC 0x00010000
+#define RTE_PTYPE_INNER_L2_MAC_VLAN 0x00020000
+#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
+/* bit 23:20 for inner L3 types */
+#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
+#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
+#define RTE_PTYPE_INNER_INNER_L3_MASK 0x00f00000
+/* bit 27:24 for inner L4 types */
+#define RTE_PTYPE_INNER_L4_TCP 0x01000000
+#define RTE_PTYPE_INNER_L4_UDP 0x02000000
+#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
+#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
+#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
+#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
+#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
+/* bit 31:28 reserved */
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
+ * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
+ * determine if it is an IPv4 packet.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
+ * one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
+ * determine if it is an IPv6 packet.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneled packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+
/**
* Get the name of a RX offload flag
*
@@ -232,17 +322,26 @@ struct rte_mbuf {
/* remaining bytes are set on RX when pulling packet from descriptor */
MARKER rx_descriptor_fields1;
- /**
- * The packet type, which is used to indicate ordinary packet and also
- * tunneled packet format, i.e. each number is represented a type of
- * packet.
+ /*
+ * The packet type, which is the combination of outer/inner L2, L3, L4
+ * and tunnel types.
*/
- uint16_t packet_type;
+ union {
+ uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
+ struct {
+ uint32_t l2_type:4; /**< (Outer) L2 type. */
+ uint32_t l3_type:4; /**< (Outer) L3 type. */
+ uint32_t l4_type:4; /**< (Outer) L4 type. */
+ uint32_t tun_type:4; /**< Tunnel type. */
+ uint32_t inner_l2_type:4; /**< Inner L2 type. */
+ uint32_t inner_l3_type:4; /**< Inner L3 type. */
+ uint32_t inner_l4_type:4; /**< Inner L4 type. */
+ };
+ };
- uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
- uint16_t reserved;
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
--
1.9.3
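The macros and the enlarged packet_type field added in the diff above can be
consumed directly by applications. The sketch below is illustrative only and
is not part of the patch; classify_mbuf() is a hypothetical helper, while the
RTE_PTYPE_*/RTE_ETH_IS_* names are the ones defined in this patch.

#include <stdio.h>
#include <rte_mbuf.h>

/* Hypothetical helper: report what the unified packet type says about a
 * received mbuf, using only the masks and macros defined in this patch. */
static void
classify_mbuf(const struct rte_mbuf *m)
{
	uint32_t ptype = m->packet_type;

	if (RTE_ETH_IS_IPV4_HDR(ptype))
		printf("outer L3: IPv4\n");
	else if (RTE_ETH_IS_IPV6_HDR(ptype))
		printf("outer L3: IPv6\n");

	if ((ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)
		printf("outer L4: TCP\n");

	/* The same information is available through the new bit fields,
	 * e.g. m->l3_type or m->tun_type, thanks to the anonymous union. */
}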
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v2 02/15] e1000: support of unified packet type
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 01/15] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-02-09 6:40 ` Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 03/15] ixgbe: " Helin Zhang
` (12 subsequent siblings)
14 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++++++++++++++++++++++++++++++++++-------
1 file changed, 83 insertions(+), 15 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 5c394a9..12a68f4 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -602,17 +602,85 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
+#define IGB_PACKET_TYPE_IPV4 0X01
+#define IGB_PACKET_TYPE_IPV4_TCP 0X11
+#define IGB_PACKET_TYPE_IPV4_UDP 0X21
+#define IGB_PACKET_TYPE_IPV4_SCTP 0X41
+#define IGB_PACKET_TYPE_IPV4_EXT 0X03
+#define IGB_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IGB_PACKET_TYPE_IPV6 0X04
+#define IGB_PACKET_TYPE_IPV6_TCP 0X14
+#define IGB_PACKET_TYPE_IPV6_UDP 0X24
+#define IGB_PACKET_TYPE_IPV6_EXT 0X0C
+#define IGB_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IGB_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IGB_PACKET_TYPE_IPV4_IPV6 0X05
+#define IGB_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IGB_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IGB_PACKET_TYPE_MAX 0X80
+#define IGB_PACKET_TYPE_MASK 0X7F
+#define IGB_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IGB_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IGB_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [IGB_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IGB_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IGB_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & E1000_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IGB_PACKET_TYPE_SHIFT) & IGB_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- };
+ uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH;
#if defined(RTE_LIBRTE_IEEE1588)
static uint32_t ip_pkt_etqf_map[8] = {
@@ -620,14 +688,10 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
0, 0, 0, 0,
};
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-#else
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ pkt_flags |= ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07];
#endif
- return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
+
+ return pkt_flags;
}
static inline uint64_t
@@ -802,6 +866,8 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
+ lo_dword.hs_rss.pkt_info);
/*
* Store the mbuf address into the next entry of the array
@@ -1036,6 +1102,8 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
+ lower.lo_dword.hs_rss.pkt_info);
/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
--
1.9.3
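For readers decoding the IGB_PACKET_TYPE_* values above: they appear to be
bit-encoded per protocol. That is an inference from the constants themselves,
not something stated in the patch; the C11 sketch below spells out the
assumption, and the PT_* names are hypothetical.

/* Assumed encoding of the hardware packet-type index (inferred from the
 * defines above, not documented in this patch):
 *   bit 0 IPv4, bit 1 IPv4 options, bit 2 IPv6, bit 3 IPv6 extensions,
 *   bit 4 TCP, bit 5 UDP, bit 6 SCTP, hence IGB_PACKET_TYPE_MASK of 0x7F. */
enum {
	PT_IPV4  = 0x01, PT_IPV4E = 0x02, PT_IPV6 = 0x04, PT_IPV6E = 0x08,
	PT_TCP   = 0x10, PT_UDP   = 0x20, PT_SCTP = 0x40,
};

/* 0x11 matches IGB_PACKET_TYPE_IPV4_TCP and 0x2D matches
 * IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP in the table above. */
_Static_assert((PT_IPV4 | PT_TCP) == 0x11, "IPv4 + TCP index");
_Static_assert((PT_IPV4 | PT_IPV6 | PT_IPV6E | PT_UDP) == 0x2D,
	       "IPv6-ext in IPv4 + UDP index");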
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v2 03/15] ixgbe: support of unified packet type
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 01/15] mbuf: add definitions of unified packet types Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 02/15] e1000: support of unified packet type Helin Zhang
@ 2015-02-09 6:40 ` Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 04/15] ixgbe: support of unified packet type for vector Helin Zhang
` (11 subsequent siblings)
14 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type.
Note that a performance drop of around 2.5% (64B packets) was observed when
doing IO forwarding on 4 ports (1 port per 82599 card) on the same SNB core.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++++++++++++++++++++++++++++---------
1 file changed, 112 insertions(+), 34 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index e6766b3..a2e4234 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -866,40 +866,107 @@ end_of_tx:
* RX functions
*
**********************************************************************/
-static inline uint64_t
-rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
+#define IXGBE_PACKET_TYPE_IPV4 0X01
+#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
+#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
+#define IXGBE_PACKET_TYPE_IPV4_SCTP 0X41
+#define IXGBE_PACKET_TYPE_IPV4_EXT 0X03
+#define IXGBE_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IXGBE_PACKET_TYPE_IPV6 0X04
+#define IXGBE_PACKET_TYPE_IPV6_TCP 0X14
+#define IXGBE_PACKET_TYPE_IPV6_UDP 0X24
+#define IXGBE_PACKET_TYPE_IPV6_EXT 0X0C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IXGBE_PACKET_TYPE_IPV4_IPV6 0X05
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IXGBE_PACKET_TYPE_MAX 0X80
+#define IXGBE_PACKET_TYPE_MASK 0X7F
+#define IXGBE_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+ixgbe_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
+ static const uint32_t
+ ptype_table[IXGBE_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IXGBE_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4,
+ [IXGBE_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IXGBE_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IXGBE_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
};
+ if (unlikely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
- static uint64_t ip_rss_types_map[16] = {
+ pkt_info = (pkt_info >> IXGBE_PACKET_TYPE_SHIFT) &
+ IXGBE_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
+{
+ static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
PKT_RX_RSS_HASH, 0, 0, 0,
0, 0, 0, PKT_RX_FDIR,
};
-
#ifdef RTE_LIBRTE_IEEE1588
static uint64_t ip_pkt_etqf_map[8] = {
0, 0, 0, PKT_RX_IEEE1588_PTP,
0, 0, 0, 0,
};
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ if (likely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return ip_pkt_etqf_map[(pkt_info >> 4) & 0X07] |
+ ip_rss_types_map[pkt_info & 0xF];
+ else
+ return ip_rss_types_map[pkt_info & 0xF];
#else
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-
+ return ip_rss_types_map[pkt_info & 0xF];
#endif
- return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
}
static inline uint64_t
@@ -956,7 +1023,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
- int s[LOOK_AHEAD], nb_dd;
+ int nb_dd;
+ uint32_t s[LOOK_AHEAD];
+ uint16_t pkt_info[LOOK_AHEAD];
int i, j, nb_rx = 0;
@@ -979,6 +1048,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rxdp[j].wb.upper.status_error;
+ for (j = LOOK_AHEAD-1; j >= 0; --j)
+ pkt_info[j] = rxdp[j].wb.lower.lo_dword.hs_rss.pkt_info;
+
/* Compute how many status bits were set */
nb_dd = 0;
for (j = 0; j < LOOK_AHEAD; ++j)
@@ -996,12 +1068,13 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
- rxdp[j].wb.lower.lo_dword.data);
- /* reuse status field from scan list */
- pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
+ pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags |=
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
mb->ol_flags = pkt_flags;
+ mb->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);
if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
@@ -1198,7 +1271,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint32_t pkt_info;
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1316,14 +1389,17 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;
- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
@@ -1382,7 +1458,7 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma; /* Physical address of mbuf data buffer */
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint16_t pkt_info;
uint16_t rx_id;
uint16_t nb_rx;
uint16_t nb_hold;
@@ -1561,13 +1637,15 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* set in the pkt_flags field.
*/
first_seg->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = (pkt_flags |
- rx_desc_status_to_pkt_flags(staterr));
- pkt_flags = (pkt_flags |
- rx_desc_error_to_pkt_flags(staterr));
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
if (likely(pkt_flags & PKT_RX_RSS_HASH))
first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
--
1.9.3
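One practical consequence of the table above: the IP-in-IP tunnel recognition
that 82599 reports, which the old PKT_RX_* flags could not express, is now
visible to applications. The sketch below is not part of the patch;
handle_ip_tunnel() is a hypothetical helper, and the RTE_PTYPE_* names are the
ones from the mbuf patch in this series (including its
RTE_PTYPE_INNER_INNER_L3_MASK spelling).

#include <stdio.h>
#include <rte_mbuf.h>

/* Hypothetical helper: detect IP-in-IP packets reported by ixgbe and
 * inspect the inner L3 type, something ol_flags could not describe. */
static void
handle_ip_tunnel(const struct rte_mbuf *m)
{
	uint32_t ptype = m->packet_type;

	if ((ptype & RTE_PTYPE_TUNNEL_MASK) != RTE_PTYPE_TUNNEL_IP)
		return;		/* not an IP-in-IP packet */

	if ((ptype & RTE_PTYPE_INNER_INNER_L3_MASK) == RTE_PTYPE_INNER_L3_IPV6)
		printf("IPv6 carried inside an outer IP header\n");
}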
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v2 04/15] ixgbe: support of unified packet type for vector
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
` (2 preceding siblings ...)
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 03/15] ixgbe: " Helin Zhang
@ 2015-02-09 6:40 ` Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 05/15] i40e: support of unified packet type Helin Zhang
` (10 subsequent siblings)
14 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type.
Note that a performance drop of around 2% (64B packets) was observed when
doing IO forwarding on 4 ports (1 port per 82599 card) on the same SNB core.
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +++++++++++++++++++----------------
1 file changed, 26 insertions(+), 23 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
index b54cb19..357eb1d 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
@@ -134,44 +134,35 @@ ixgbe_rxq_rearm(struct igb_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE
-#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
- PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
- PKT_RX_IPV6_HDR_EXT))
-#define OLFLAGS_MASK_V (((uint64_t)OLFLAGS_MASK << 48) | \
- ((uint64_t)OLFLAGS_MASK << 32) | \
- ((uint64_t)OLFLAGS_MASK << 16) | \
- ((uint64_t)OLFLAGS_MASK))
-#define PTYPE_SHIFT (1)
+#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
+ ((uint64_t)PKT_RX_VLAN_PKT))
#define VTAG_SHIFT (3)
static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
- __m128i ptype0, ptype1, vtag0, vtag1;
+ __m128i vtag0, vtag1;
union {
uint16_t e[4];
uint64_t dword;
} vol;
- ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
- ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);
- ptype1 = _mm_unpacklo_epi32(ptype0, ptype1);
vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
-
- ptype1 = _mm_slli_epi16(ptype1, PTYPE_SHIFT);
vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);
- ptype1 = _mm_or_si128(ptype1, vtag1);
- vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
+ vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;
rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
rx_pkts[2]->ol_flags = vol.e[2];
rx_pkts[3]->ol_flags = vol.e[3];
}
+
#else
#define desc_to_olflags_v(desc, rx_pkts) do {} while (0)
#endif
@@ -197,13 +188,15 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
uint64_t var;
__m128i shuf_msk;
__m128i crc_adjust = _mm_set_epi16(
- 0, 0, 0, 0, /* ignore non-length fields */
+ 0, 0, 0, /* ignore non-length fields */
+ -rxq->crc_len, /* sub crc on data_len */
0, /* ignore high-16bits of pkt_len */
-rxq->crc_len, /* sub crc on pkt_len */
- -rxq->crc_len, /* sub crc on data_len */
- 0 /* ignore pkt_type field */
+ 0, 0 /* ignore pkt_type field */
);
__m128i dd_check, eop_check;
+ __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
+ 0xFFFFFFFF, 0xFFFF07F0);
if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -234,12 +227,13 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* mask to shuffle from desc. to mbuf */
shuf_msk = _mm_set_epi8(
7, 6, 5, 4, /* octet 4~7, 32bits rss */
- 0xFF, 0xFF, /* skip high 16 bits vlan_macip, zero out */
15, 14, /* octet 14~15, low 16 bits vlan_macip */
+ 13, 12, /* octet 12~13, 16 bits data_len */
0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
13, 12, /* octet 12~13, low 16 bits pkt_len */
- 13, 12, /* octet 12~13, 16 bits data_len */
- 0xFF, 0xFF /* skip pkt_type field */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_type */
+ 1, /* octet 1, 8 bits pkt_type field */
+ 0 /* octet 0, 4 bits offset 4 pkt_type field */
);
/* Cache is empty -> need to scan the buffer rings, but first move
@@ -248,6 +242,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/*
* A. load 4 packet in one loop
+ * [A*. mask out 4 unused dirty field in desc]
* B. copy 4 mbuf point from swring to rx_pkts
* C. calc the number of DD bits among the 4 packets
* [C*. extract the end-of-packet bit, if requested]
@@ -289,6 +284,14 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.2 copy 2 mbuf point into rx_pkts */
_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);
+ /* A* mask out 0~3 bits RSS type */
+ descs[3] = _mm_and_si128(descs[3], desc_mask);
+ descs[2] = _mm_and_si128(descs[2], desc_mask);
+
+ /* A* mask out 0~3 bits RSS type */
+ descs[1] = _mm_and_si128(descs[1], desc_mask);
+ descs[0] = _mm_and_si128(descs[0], desc_mask);
+
/* avoid compiler reorder optimization */
rte_compiler_barrier();
@@ -301,7 +304,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
- /* set ol_flags with packet type and vlan tag */
+ /* set ol_flags with vlan packet type */
desc_to_olflags_v(descs, &rx_pkts[pos]);
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
--
1.9.3
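For readers less familiar with the vector RX path, the fragment below is a
standalone illustration, not driver code, of the two new steps above: the AND
with desc_mask that clears the 4-bit RSS type, and a byte shuffle that moves
the remaining packet-type bytes into the low half of a 32-bit word. The
descriptor value is made up, and the shuffle mask is a simplified one that
keeps only those two bytes (the real shuf_msk also moves data_len, pkt_len
and vlan_tci).

#include <stdio.h>
#include <stdint.h>
#include <tmmintrin.h>		/* SSSE3: _mm_shuffle_epi8; build with -mssse3 */

int
main(void)
{
	/* Fake low dword of a descriptor: RSS type nibble 0x3,
	 * hardware packet type 0x11 (IPv4 + TCP) in bits 4..10. */
	uint32_t lo_dword = (0x11 << 4) | 0x3;			/* 0x113 */
	__m128i desc = _mm_set_epi32(0, 0, 0, (int)lo_dword);
	__m128i desc_mask = _mm_set_epi32(-1, -1, -1, (int)0xFFFF07F0);

	/* A*: mask out the 0~3 bit RSS type, keep the packet type bits. */
	desc = _mm_and_si128(desc, desc_mask);
	printf("masked: 0x%x\n", (unsigned)_mm_cvtsi128_si32(desc)); /* 0x110 */

	/* Shuffle bytes 0 and 1 into place and zero the two bytes above
	 * them (mask bytes with the high bit set produce zero). */
	__m128i shuf = _mm_set_epi8(-1, -1, -1, -1, -1, -1, -1, -1,
				    -1, -1, -1, -1, -1, -1, 1, 0);
	__m128i out = _mm_shuffle_epi8(desc, shuf);
	printf("packet_type word: 0x%x\n", (unsigned)_mm_cvtsi128_si32(out));
	return 0;
}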
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v2 05/15] i40e: support of unified packet type
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
` (3 preceding siblings ...)
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 04/15] ixgbe: support of unified packet type for vector Helin Zhang
@ 2015-02-09 6:40 ` Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 06/15] enic: " Helin Zhang
` (9 subsequent siblings)
14 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++++++++++++++--------------
1 file changed, 512 insertions(+), 274 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 2beae3c..bcb49f0 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -146,272 +146,511 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
return flags;
}
-/* Translate pkt types to pkt flags */
-static inline uint64_t
-i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
+/* Translate the 8-bit hardware packet type; the hardware datasheet describes each value */
+static inline uint32_t
+i40e_rxd_pkt_type_mapping(uint8_t ptype)
{
- uint8_t ptype = (uint8_t)((qword & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
- static const uint64_t ip_ptype_map[I40E_MAX_PKT_TYPE] = {
- 0, /* PTYPE 0 */
- 0, /* PTYPE 1 */
- 0, /* PTYPE 2 */
- 0, /* PTYPE 3 */
- 0, /* PTYPE 4 */
- 0, /* PTYPE 5 */
- 0, /* PTYPE 6 */
- 0, /* PTYPE 7 */
- 0, /* PTYPE 8 */
- 0, /* PTYPE 9 */
- 0, /* PTYPE 10 */
- 0, /* PTYPE 11 */
- 0, /* PTYPE 12 */
- 0, /* PTYPE 13 */
- 0, /* PTYPE 14 */
- 0, /* PTYPE 15 */
- 0, /* PTYPE 16 */
- 0, /* PTYPE 17 */
- 0, /* PTYPE 18 */
- 0, /* PTYPE 19 */
- 0, /* PTYPE 20 */
- 0, /* PTYPE 21 */
- PKT_RX_IPV4_HDR, /* PTYPE 22 */
- PKT_RX_IPV4_HDR, /* PTYPE 23 */
- PKT_RX_IPV4_HDR, /* PTYPE 24 */
- 0, /* PTYPE 25 */
- PKT_RX_IPV4_HDR, /* PTYPE 26 */
- PKT_RX_IPV4_HDR, /* PTYPE 27 */
- PKT_RX_IPV4_HDR, /* PTYPE 28 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 29 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 30 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 31 */
- 0, /* PTYPE 32 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 33 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 34 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 35 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 36 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 37 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 38 */
- 0, /* PTYPE 39 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 40 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 41 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 42 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 43 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 44 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 45 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 46 */
- 0, /* PTYPE 47 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 48 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 49 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 50 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 51 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 52 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 53 */
- 0, /* PTYPE 54 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 55 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 56 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 57 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 58 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 59 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 60 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 61 */
- 0, /* PTYPE 62 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 63 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 64 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 65 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 66 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 67 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 68 */
- 0, /* PTYPE 69 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 70 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 71 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 72 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 73 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 74 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 75 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 76 */
- 0, /* PTYPE 77 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 78 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 79 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 80 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 81 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 82 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 83 */
- 0, /* PTYPE 84 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 85 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 86 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 87 */
- PKT_RX_IPV6_HDR, /* PTYPE 88 */
- PKT_RX_IPV6_HDR, /* PTYPE 89 */
- PKT_RX_IPV6_HDR, /* PTYPE 90 */
- 0, /* PTYPE 91 */
- PKT_RX_IPV6_HDR, /* PTYPE 92 */
- PKT_RX_IPV6_HDR, /* PTYPE 93 */
- PKT_RX_IPV6_HDR, /* PTYPE 94 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 95 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 96 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 97 */
- 0, /* PTYPE 98 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 99 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 100 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 101 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 102 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 103 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 104 */
- 0, /* PTYPE 105 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 106 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 107 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 108 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 109 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 110 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 111 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 112 */
- 0, /* PTYPE 113 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 114 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 115 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 116 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 117 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 118 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 119 */
- 0, /* PTYPE 120 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 121 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 122 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 123 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 124 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 125 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 126 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 127 */
- 0, /* PTYPE 128 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 129 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 130 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 131 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 132 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 133 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 134 */
- 0, /* PTYPE 135 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 136 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 137 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 138 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 139 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 140 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 141 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 142 */
- 0, /* PTYPE 143 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 144 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 145 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 146 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 147 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 148 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 149 */
- 0, /* PTYPE 150 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 151 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 152 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 153 */
- 0, /* PTYPE 154 */
- 0, /* PTYPE 155 */
- 0, /* PTYPE 156 */
- 0, /* PTYPE 157 */
- 0, /* PTYPE 158 */
- 0, /* PTYPE 159 */
- 0, /* PTYPE 160 */
- 0, /* PTYPE 161 */
- 0, /* PTYPE 162 */
- 0, /* PTYPE 163 */
- 0, /* PTYPE 164 */
- 0, /* PTYPE 165 */
- 0, /* PTYPE 166 */
- 0, /* PTYPE 167 */
- 0, /* PTYPE 168 */
- 0, /* PTYPE 169 */
- 0, /* PTYPE 170 */
- 0, /* PTYPE 171 */
- 0, /* PTYPE 172 */
- 0, /* PTYPE 173 */
- 0, /* PTYPE 174 */
- 0, /* PTYPE 175 */
- 0, /* PTYPE 176 */
- 0, /* PTYPE 177 */
- 0, /* PTYPE 178 */
- 0, /* PTYPE 179 */
- 0, /* PTYPE 180 */
- 0, /* PTYPE 181 */
- 0, /* PTYPE 182 */
- 0, /* PTYPE 183 */
- 0, /* PTYPE 184 */
- 0, /* PTYPE 185 */
- 0, /* PTYPE 186 */
- 0, /* PTYPE 187 */
- 0, /* PTYPE 188 */
- 0, /* PTYPE 189 */
- 0, /* PTYPE 190 */
- 0, /* PTYPE 191 */
- 0, /* PTYPE 192 */
- 0, /* PTYPE 193 */
- 0, /* PTYPE 194 */
- 0, /* PTYPE 195 */
- 0, /* PTYPE 196 */
- 0, /* PTYPE 197 */
- 0, /* PTYPE 198 */
- 0, /* PTYPE 199 */
- 0, /* PTYPE 200 */
- 0, /* PTYPE 201 */
- 0, /* PTYPE 202 */
- 0, /* PTYPE 203 */
- 0, /* PTYPE 204 */
- 0, /* PTYPE 205 */
- 0, /* PTYPE 206 */
- 0, /* PTYPE 207 */
- 0, /* PTYPE 208 */
- 0, /* PTYPE 209 */
- 0, /* PTYPE 210 */
- 0, /* PTYPE 211 */
- 0, /* PTYPE 212 */
- 0, /* PTYPE 213 */
- 0, /* PTYPE 214 */
- 0, /* PTYPE 215 */
- 0, /* PTYPE 216 */
- 0, /* PTYPE 217 */
- 0, /* PTYPE 218 */
- 0, /* PTYPE 219 */
- 0, /* PTYPE 220 */
- 0, /* PTYPE 221 */
- 0, /* PTYPE 222 */
- 0, /* PTYPE 223 */
- 0, /* PTYPE 224 */
- 0, /* PTYPE 225 */
- 0, /* PTYPE 226 */
- 0, /* PTYPE 227 */
- 0, /* PTYPE 228 */
- 0, /* PTYPE 229 */
- 0, /* PTYPE 230 */
- 0, /* PTYPE 231 */
- 0, /* PTYPE 232 */
- 0, /* PTYPE 233 */
- 0, /* PTYPE 234 */
- 0, /* PTYPE 235 */
- 0, /* PTYPE 236 */
- 0, /* PTYPE 237 */
- 0, /* PTYPE 238 */
- 0, /* PTYPE 239 */
- 0, /* PTYPE 240 */
- 0, /* PTYPE 241 */
- 0, /* PTYPE 242 */
- 0, /* PTYPE 243 */
- 0, /* PTYPE 244 */
- 0, /* PTYPE 245 */
- 0, /* PTYPE 246 */
- 0, /* PTYPE 247 */
- 0, /* PTYPE 248 */
- 0, /* PTYPE 249 */
- 0, /* PTYPE 250 */
- 0, /* PTYPE 251 */
- 0, /* PTYPE 252 */
- 0, /* PTYPE 253 */
- 0, /* PTYPE 254 */
- 0, /* PTYPE 255 */
+ static const uint32_t ptype_table[UINT8_MAX] __rte_cache_aligned = {
+ /* L2 types */
+ /* [0] reserved */
+ [1] = RTE_PTYPE_L2_MAC,
+ [2] = RTE_PTYPE_L2_MAC_TIMESYNC,
+ /* [3] - [5] reserved */
+ [6] = RTE_PTYPE_L2_LLDP,
+ /* [7] - [10] reserved */
+ [11] = RTE_PTYPE_L2_ARP,
+ /* [12] - [21] reserved */
+
+ /* Non tunneled IPv4 */
+ [22] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [24] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [25] reserved */
+ [26] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv4 */
+ [29] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [30] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [31] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [32] reserved */
+ [33] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [34] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [35] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> IPv6 */
+ [36] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [37] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [38] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [39] reserved */
+ [40] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [41] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [42] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN */
+ [43] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+ [44] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [45] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [46] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [47] reserved */
+ [48] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [49] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [50] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+ [51] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [52] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [53] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [54] reserved */
+ [55] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [56] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [57] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+ [58] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [59] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [60] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [61] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [62] reserved */
+ [63] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [64] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [65] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [66] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [67] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [68] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [69] reserved */
+ [70] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [71] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [72] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [73] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [74] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [75] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [76] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [77] reserved */
+ [78] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [79] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [80] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [81] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [82] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [83] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [84] reserved */
+ [85] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [86] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [87] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* Non tunneled IPv6 */
+ [88] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [90] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [91] reserved */
+ [92] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv4 */
+ [95] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [96] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [97] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [98] reserved */
+ [99] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [100] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [101] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> IPv6 */
+ [102] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [103] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [104] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [105] reserved */
+ [106] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [107] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [108] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN */
+ [109] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+ [110] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [111] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [112] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [113] reserved */
+ [114] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [115] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [116] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+ [117] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [118] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [119] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [120] reserved */
+ [121] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [122] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [123] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+ [124] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [125] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [126] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [127] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [128] reserved */
+ [129] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [130] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [131] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [132] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [133] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [134] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [135] reserved */
+ [136] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [137] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [138] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [139] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [140] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [141] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [142] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [143] reserved */
+ [144] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [145] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [146] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [147] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [148] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [149] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [150] reserved */
+ [151] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [152] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [153] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* All others reserved */
};
- return ip_ptype_map[ptype];
+ return ptype_table[ptype];
}
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
@@ -708,11 +947,11 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- mb->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ mb->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -951,9 +1190,9 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ rxm->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1110,10 +1349,9 @@ i40e_recv_scattered_pkts(void *rx_queue,
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- first_seg->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ first_seg->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
--
1.9.3
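Since the i40e table above now carries fragmentation information that the old
PKT_RX_* flags could not, an application can branch on it directly. A minimal
sketch follows, assuming only the macros from the mbuf patch in this series;
is_fragment() is a hypothetical helper.

#include <rte_mbuf.h>

/* Hypothetical helper: report whether the L4 part of the packet (the inner
 * one for tunnels) is a fragment, as classified by the i40e table above. */
static inline int
is_fragment(const struct rte_mbuf *m)
{
	uint32_t ptype = m->packet_type;

	if (RTE_ETH_IS_TUNNEL_PKT(ptype))
		return (ptype & RTE_PTYPE_INNER_L4_MASK) ==
				RTE_PTYPE_INNER_L4_FRAG;

	return (ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_FRAG;
}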
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v2 06/15] enic: support of unified packet type
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
` (4 preceding siblings ...)
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 05/15] i40e: support of unified packet type Helin Zhang
@ 2015-02-09 6:40 ` Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 07/15] vmxnet3: " Helin Zhang
` (8 subsequent siblings)
14 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
ol_flags are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_enic/enic_main.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_enic/enic_main.c b/lib/librte_pmd_enic/enic_main.c
index 48fdca2..9acba9a 100644
--- a/lib/librte_pmd_enic/enic_main.c
+++ b/lib/librte_pmd_enic/enic_main.c
@@ -423,7 +423,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -432,7 +432,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Header split */
if (sop && !eop) {
@@ -445,7 +445,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -457,13 +457,14 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -475,7 +476,8 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV6;
}
}
--
1.9.3
* [dpdk-dev] [PATCH v2 07/15] vmxnet3: support of unified packet type
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
` (5 preceding siblings ...)
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 06/15] enic: " Helin Zhang
@ 2015-02-09 6:40 ` Helin Zhang
2015-02-11 1:46 ` Yong Wang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 08/15] app/test-pipeline: " Helin Zhang
` (7 subsequent siblings)
14 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
ol_flags are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 8425f32..c85ebd8 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -650,9 +650,9 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
- rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
else
- rxm->ol_flags |= PKT_RX_IPV4_HDR;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4;
if (!rcd->cnc) {
if (!rcd->ipc)
--
1.9.3
* [dpdk-dev] [PATCH v2 08/15] app/test-pipeline: support of unified packet type
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
` (6 preceding siblings ...)
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 07/15] vmxnet3: " Helin Zhang
@ 2015-02-09 6:40 ` Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 09/15] app/test: " Helin Zhang
` (6 subsequent siblings)
14 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pipeline/pipeline_hash.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 4598ad4..548615f 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,20 +459,21 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;
k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;
memcpy(key, ipv6_dst, 16);
- }
+ } else
+ continue;
*signature = test_hash(key, 0, 0);
}
--
1.9.3
* [dpdk-dev] [PATCH v2 09/15] app/test: support of unified packet type
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
` (7 preceding siblings ...)
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 08/15] app/test-pipeline: " Helin Zhang
@ 2015-02-09 6:40 ` Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 10/15] examples/ip_fragmentation: " Helin Zhang
` (5 subsequent siblings)
14 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
app/test-pmd/csumonly.c | 6 +++---
app/test-pmd/rxonly.c | 9 +++------
2 files changed, 6 insertions(+), 9 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 41711fd..5e08272 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -319,7 +319,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
uint16_t nb_tx;
uint16_t i;
uint64_t ol_flags;
- uint16_t testpmd_ol_flags;
+ uint16_t testpmd_ol_flags, packet_type;
uint8_t l4_proto, l4_tun_len = 0;
uint16_t ethertype = 0, outer_ethertype = 0;
uint16_t l2_len = 0, l3_len = 0, l4_len = 0;
@@ -362,6 +362,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
tunnel = 0;
l4_tun_len = 0;
m = pkts_burst[i];
+ packet_type = m->packet_type;
/* Update the L3/L4 checksum error packet statistics */
rx_bad_ip_csum += ((m->ol_flags & PKT_RX_IP_CKSUM_BAD) != 0);
@@ -387,8 +388,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
/* currently, this flag is set by i40e only if the
* packet is vxlan */
- } else if (m->ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR))
+ } else if (RTE_ETH_IS_TUNNEL_PKT(packet_type))
tunnel = 1;
if (tunnel == 1) {
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index fdfe990..8eb68c4 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -92,7 +92,7 @@ pkt_burst_receive(struct fwd_stream *fs)
uint64_t ol_flags;
uint16_t nb_rx;
uint16_t i, packet_type;
- uint64_t is_encapsulation;
+ uint16_t is_encapsulation;
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
@@ -135,10 +135,7 @@ pkt_burst_receive(struct fwd_stream *fs)
eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
-
- is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR);
-
+ is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
print_ether_addr(" src=", &eth_hdr->s_addr);
print_ether_addr(" - dst=", &eth_hdr->d_addr);
printf(" - type=0x%04x - length=%u - nb_segs=%d",
@@ -174,7 +171,7 @@ pkt_burst_receive(struct fwd_stream *fs)
l2_len = sizeof(struct ether_hdr);
/* Do not support ipv4 option field */
- if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
l3_len = sizeof(struct ipv4_hdr);
ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len);
--
1.9.3
* [dpdk-dev] [PATCH v2 10/15] examples/ip_fragmentation: support of unified packet type
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
` (8 preceding siblings ...)
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 09/15] app/test: " Helin Zhang
@ 2015-02-09 6:40 ` Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 11/15] examples/ip_reassembly: " Helin Zhang
` (4 subsequent siblings)
14 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_fragmentation/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index eac5427..152844e 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -286,7 +286,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
len = qconf->tx_mbufs[port_out].len;
/* if this is an IPv4 packet */
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
/* Read the lookup key (i.e. ip_dst) from the input packet */
@@ -320,9 +320,8 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
if (unlikely (len2 < 0))
return;
}
- }
- /* if this is an IPv6 packet */
- else if (m->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if this is an IPv6 packet */
struct ipv6_hdr *ip_hdr;
ipv6 = 1;
--
1.9.3
* [dpdk-dev] [PATCH v2 11/15] examples/ip_reassembly: support of unified packet type
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
` (9 preceding siblings ...)
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 10/15] examples/ip_fragmentation: " Helin Zhang
@ 2015-02-09 6:40 ` Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 12/15] examples/l3fwd-acl: " Helin Zhang
` (3 subsequent siblings)
14 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_reassembly/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8492153..5ef2135 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -357,7 +357,7 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dst_port = portid;
/* if packet is IPv4 */
- if (m->ol_flags & (PKT_RX_IPV4_HDR)) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
@@ -397,9 +397,8 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
}
eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
- }
- /* if packet is IPv6 */
- else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if packet is IPv6 */
struct ipv6_extension_fragment *frag_hdr;
struct ipv6_hdr *ip_hdr;
--
1.9.3
* [dpdk-dev] [PATCH v2 12/15] examples/l3fwd-acl: support of unified packet type
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
` (10 preceding siblings ...)
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 11/15] examples/ip_reassembly: " Helin Zhang
@ 2015-02-09 6:40 ` Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 13/15] examples/l3fwd-power: " Helin Zhang
` (2 subsequent siblings)
14 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-acl/main.c | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index f1f7601..af70ccd 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -651,9 +651,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];
- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt,
unsigned char *) + sizeof(struct ether_hdr));
@@ -674,8 +672,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
rte_pktmbuf_free(pkt);
}
- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -693,17 +690,13 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];
- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
-
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;
- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -751,9 +744,9 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
- if (m->ol_flags & PKT_RX_IPV4_HDR)
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
dump_acl4_rule(m, res);
- else
+ else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
dump_acl6_rule(m, res);
}
#endif
--
1.9.3
* [dpdk-dev] [PATCH v2 13/15] examples/l3fwd-power: support of unified packet type
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
` (11 preceding siblings ...)
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 12/15] examples/l3fwd-acl: " Helin Zhang
@ 2015-02-09 6:40 ` Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 14/15] examples/l3fwd: " Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 15/15] mbuf: remove old packet type bit masks Helin Zhang
14 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-power/main.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index f6b55b9..964e5b9 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -638,7 +638,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr =
(struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char*)
@@ -673,8 +673,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
- }
- else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
struct ipv6_hdr *ipv6_hdr;
--
1.9.3
* [dpdk-dev] [PATCH v2 14/15] examples/l3fwd: support of unified packet type
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
` (12 preceding siblings ...)
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 13/15] examples/l3fwd-power: " Helin Zhang
@ 2015-02-09 6:40 ` Helin Zhang
2015-02-16 17:04 ` Ananyev, Konstantin
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 15/15] mbuf: remove old packet type bit masks Helin Zhang
14 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks and relevant macros
of packet type for ol_flags are replaced by unified packet type and
relevant macros.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd/main.c | 64 ++++++++++++++++++++++++++++-----------------------
1 file changed, 35 insertions(+), 29 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
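The four-packet fast path now ANDs the packet_type of all four mbufs before
testing RTE_PTYPE_L3_IPV4: a bit survives the AND only if it is set in every
packet, so the x4 LPM lookup is taken only when the whole group is IPv4.
A condensed sketch of the idea (not the literal diff; 'pkts' stands for
&pkts_burst[j] in the actual code):

	uint32_t pkt_type = pkts[0]->packet_type & pkts[1]->packet_type &
			    pkts[2]->packet_type & pkts[3]->packet_type;
	if (pkt_type & RTE_PTYPE_L3_IPV4)
		simple_ipv4_fwd_4pkts(pkts, portid, qconf);	/* all four IPv4 */
	else if (pkt_type & RTE_PTYPE_L3_IPV6)
		simple_ipv6_fwd_4pkts(pkts, portid, qconf);	/* all four IPv6 */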
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 6f7d7d4..302322e 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -958,7 +958,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char *) +
sizeof(struct ether_hdr));
@@ -993,7 +993,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
send_single_packet(m, dst_port);
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;
@@ -1039,11 +1039,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
-rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint16_t ptype)
{
uint8_t ihl;
- if ((flags & PKT_RX_IPV4_HDR) != 0) {
+ if (RTE_ETH_IS_IPV4_HDR(ptype)) {
ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
@@ -1074,11 +1074,11 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;
- if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
- } else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1112,17 +1112,19 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];
dst_port[0] = dp;
- rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
+ rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
}
/*
- * Read ol_flags and destination IPV4 addresses from 4 mbufs.
+ * Read packet_type and destination IPV4 addresses from 4 mbufs.
*/
static inline void
-processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
+processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
+ __m128i *dip,
+ uint32_t *ipv4_flag)
{
struct ipv4_hdr *ipv4_hdr;
struct ether_hdr *eth_hdr;
@@ -1131,22 +1133,20 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x0 = ipv4_hdr->dst_addr;
- flag[0] = pkt[0]->ol_flags & PKT_RX_IPV4_HDR;
eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x1 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[1]->ol_flags;
eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x2 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[2]->ol_flags;
eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x3 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[3]->ol_flags;
+ *ipv4_flag = pkt[0]->packet_type & pkt[1]->packet_type &
+ pkt[2]->packet_type & pkt[3]->packet_type & RTE_PTYPE_L3_IPV4;
dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
@@ -1156,8 +1156,12 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
-processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
- uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
+processx4_step2(const struct lcore_conf *qconf,
+ __m128i dip,
+ uint32_t ipv4_flag,
+ uint8_t portid,
+ struct rte_mbuf *pkt[FWDSTEP],
+ uint16_t dprt[FWDSTEP])
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1167,7 +1171,7 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);
/* if all 4 packets are IPV4. */
- if (likely(flag != 0)) {
+ if (likely(ipv4_flag)) {
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1218,13 +1222,13 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[3], te[3]);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
- &dst_port[0], pkt[0]->ol_flags);
+ &dst_port[0], pkt[0]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
- &dst_port[1], pkt[1]->ol_flags);
+ &dst_port[1], pkt[1]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
- &dst_port[2], pkt[2]->ol_flags);
+ &dst_port[2], pkt[2]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
- &dst_port[3], pkt[3]->ol_flags);
+ &dst_port[3], pkt[3]->packet_type);
}
/*
@@ -1411,7 +1415,7 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
- uint32_t flag[MAX_PKT_BURST / FWDSTEP];
+ uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
uint16_t pnum[MAX_PKT_BURST + 1];
#endif
@@ -1481,14 +1485,16 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
for (j = 0; j < n ; j+=4) {
- uint32_t ol_flag = pkts_burst[j]->ol_flags
- & pkts_burst[j+1]->ol_flags
- & pkts_burst[j+2]->ol_flags
- & pkts_burst[j+3]->ol_flags;
- if (ol_flag & PKT_RX_IPV4_HDR ) {
+ uint32_t pkt_type =
+ pkts_burst[j]->packet_type &
+ pkts_burst[j+1]->packet_type &
+ pkts_burst[j+2]->packet_type &
+ pkts_burst[j+3]->packet_type;
+ if (pkt_type & RTE_PTYPE_L3_IPV4) {
simple_ipv4_fwd_4pkts(&pkts_burst[j],
portid, qconf);
- } else if (ol_flag & PKT_RX_IPV6_HDR) {
+ } else if (pkt_type &
+ RTE_PTYPE_L3_IPV6) {
simple_ipv6_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1513,13 +1519,13 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
- &flag[j / FWDSTEP]);
+ &ipv4_flag[j / FWDSTEP]);
}
k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
- flag[j / FWDSTEP], portid,
+ ipv4_flag[j / FWDSTEP], portid,
&pkts_burst[j], &dst_port[j]);
}
--
1.9.3
* [dpdk-dev] [PATCH v2 15/15] mbuf: remove old packet type bit masks
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
` (13 preceding siblings ...)
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 14/15] examples/l3fwd: " Helin Zhang
@ 2015-02-09 6:40 ` Helin Zhang
14 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-09 6:40 UTC (permalink / raw)
To: dev
As unified packet types are used instead, those old bit masks
and the relevant macros for packet type indication need to be
removed.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 6 ------
lib/librte_mbuf/rte_mbuf.h | 14 ++++----------
2 files changed, 4 insertions(+), 16 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
* Redefined the bit masks for packet RX offload flags.
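For application code that still keys off the removed flags, the replacement
is the packet_type field together with the RTE_ETH_IS_IPV4_HDR()/
RTE_ETH_IS_IPV6_HDR() helpers this series introduces. A minimal migration
sketch (handle_ipv4()/handle_ipv6() are just placeholders for the
application's own code):

	/* before: if (m->ol_flags & PKT_RX_IPV4_HDR) */
	if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
		handle_ipv4(m);
	/* before: else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) */
	else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
		handle_ipv6(m);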
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 1b14e02..8050ccf 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -215,14 +215,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
- case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
- case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
- case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
- case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
- case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
- case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ee912d6..55336b2 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -90,16 +90,10 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
-#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
-#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
-#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
-#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
-#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
-#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
-#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
-#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
-#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
-#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
+#define PKT_RX_IEEE1588_PTP (1ULL << 5) /**< RX IEEE1588 L2 Ethernet PT Packet. */
+#define PKT_RX_IEEE1588_TMST (1ULL << 6) /**< RX IEEE1588 L2/L4 timestamped packet.*/
+#define PKT_RX_FDIR_ID (1ULL << 7) /**< FD id reported if FDIR match. */
+#define PKT_RX_FDIR_FLX (1ULL << 8) /**< Flexible bytes reported if FDIR match. */
/* add new RX flags here */
/* add new TX flags here */
--
1.9.3
* Re: [dpdk-dev] [PATCH v2 01/15] mbuf: add definitions of unified packet types
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 01/15] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-02-09 10:27 ` Bruce Richardson
2015-02-10 0:53 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Bruce Richardson @ 2015-02-09 10:27 UTC (permalink / raw)
To: Helin Zhang; +Cc: dev
On Mon, Feb 09, 2015 at 02:40:35PM +0800, Helin Zhang wrote:
> As there are only 6 bit flags in ol_flags for indicating packet types,
> which is not enough to describe all the possible packet types hardware
> can recognize. For example, i40e hardware can recognize more than 150
> packet types. Unified packet type is composed of tunnel type, L3 type,
> L4 type and inner L3 type fields, and can be stored in mbuf field of
> 'packet_type' which is modified from 16 bits to 32 bits in mbuf structure.
> Accordingly, the structure of 'rte_kni_mbuf' needs to be modified as well.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
> ---
> .../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
> lib/librte_mbuf/rte_mbuf.h | 113 +++++++++++++++++++--
> 2 files changed, 108 insertions(+), 9 deletions(-)
>
> v2 changes:
> * Enlarged the packet_type field from 16 bits to 32 bits.
> * Redefined the packet type sub-fields.
> * Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
>
Since these changes to the mbuf will break the operation of the vector driver,
that vector driver needs to be taken into account here.
Some suggestions/options:
1. Temporarily disable the VPMD at compile time or at run time as part of this
patch, and put the vector changes as the next patch (re-enabling the driver too)
2. Put in the minimum changes for the new mbuf layout into this patch. It will
make this patch a little longer, but may still be doable as it's only a couple
of fields changing, not the whole structure.
/Bruce
* Re: [dpdk-dev] [PATCH v2 01/15] mbuf: add definitions of unified packet types
2015-02-09 10:27 ` Bruce Richardson
@ 2015-02-10 0:53 ` Zhang, Helin
2015-02-10 10:12 ` Bruce Richardson
0 siblings, 1 reply; 257+ messages in thread
From: Zhang, Helin @ 2015-02-10 0:53 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: dev
Hi Bruce
Fortunately Steve is the author of a sub-patch for the vector PMD in this
patch set, which means the VPMD has already been taken into account.
Everything works with the vector PMD, and the performance results are mentioned there.
Everything is done for these mbuf changes.
Regards,
Helin
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Monday, February 9, 2015 6:27 PM
> To: Zhang, Helin
> Cc: dev@dpdk.org; Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev,
> Konstantin
> Subject: Re: [PATCH v2 01/15] mbuf: add definitions of unified packet types
>
> On Mon, Feb 09, 2015 at 02:40:35PM +0800, Helin Zhang wrote:
> > As there are only 6 bit flags in ol_flags for indicating packet types,
> > which is not enough to describe all the possible packet types hardware
> > can recognize. For example, i40e hardware can recognize more than 150
> > packet types. Unified packet type is composed of tunnel type, L3 type,
> > L4 type and inner L3 type fields, and can be stored in mbuf field of
> > 'packet_type' which is modified from 16 bits to 32 bits in mbuf structure.
> > Accordingly, the structure of 'rte_kni_mbuf' needs to be modified as well.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
> > ---
> > .../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
> > lib/librte_mbuf/rte_mbuf.h | 113
> +++++++++++++++++++--
> > 2 files changed, 108 insertions(+), 9 deletions(-)
> >
> > v2 changes:
> > * Enlarged the packet_type field from 16 bits to 32 bits.
> > * Redefined the packet type sub-fields.
> > * Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
> >
>
> Since these changes to the mbuf will break the operation of the vector driver,
> that vector driver needs to be taken into account here.
>
> Some suggestions/options:
> 1. Temporarily disable the VPMD at compile time or at run time as part of this
> patch, and put the vector changes as the next patch (re-enabling the driver too)
> 2. Put in the minimum changes for the new mbuf layout into this patch. It will
> make this patch a little longer, but may still be doable as it's only a couple of
> fields changing, not the whole structure.
>
> /Bruce
* Re: [dpdk-dev] [PATCH v2 01/15] mbuf: add definitions of unified packet types
2015-02-10 0:53 ` Zhang, Helin
@ 2015-02-10 10:12 ` Bruce Richardson
0 siblings, 0 replies; 257+ messages in thread
From: Bruce Richardson @ 2015-02-10 10:12 UTC (permalink / raw)
To: Zhang, Helin; +Cc: dev
On Tue, Feb 10, 2015 at 12:53:52AM +0000, Zhang, Helin wrote:
> Hi Bruce
>
> Fortunately Steve is the author of a sub-patch for the vector PMD in this
> patch set, which means the VPMD has already been taken into account.
> Everything works with the vector PMD, and the performance results are mentioned there.
> Everything is done for these mbuf changes.
>
> Regards,
> Helin
>
I see that, Helin, but between applying this patch and applying the subsequent
patch for the vector PMD, the DPDK vector PMD code is broken, which would cause
problems for anyone doing a git bisect. Hence my suggestion that changes to take
account of the vpmd need to go in this patch (not just in the patch set) to
avoid having broken code following this commit.
/Bruce
> > -----Original Message-----
> > From: Richardson, Bruce
> > Sent: Monday, February 9, 2015 6:27 PM
> > To: Zhang, Helin
> > Cc: dev@dpdk.org; Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev,
> > Konstantin
> > Subject: Re: [PATCH v2 01/15] mbuf: add definitions of unified packet types
> >
> > On Mon, Feb 09, 2015 at 02:40:35PM +0800, Helin Zhang wrote:
> > > As there are only 6 bit flags in ol_flags for indicating packet types,
> > > which is not enough to describe all the possible packet types hardware
> > > can recognize. For example, i40e hardware can recognize more than 150
> > > packet types. Unified packet type is composed of tunnel type, L3 type,
> > > L4 type and inner L3 type fields, and can be stored in mbuf field of
> > > 'packet_type' which is modified from 16 bits to 32 bits in mbuf structure.
> > > Accordingly, the structure of 'rte_kni_mbuf' needs to be modified as well.
> > >
> > > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > > Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
> > > ---
> > > .../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
> > > lib/librte_mbuf/rte_mbuf.h | 113
> > +++++++++++++++++++--
> > > 2 files changed, 108 insertions(+), 9 deletions(-)
> > >
> > > v2 changes:
> > > * Enlarged the packet_type field from 16 bits to 32 bits.
> > > * Redefined the packet type sub-fields.
> > > * Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
> > >
> >
> > Since these changes to the mbuf will break the operation of the vector driver,
> > that vector driver needs to be taken into account here.
> >
> > Some suggestions/options:
> > 1. Temporarily disable the VPMD at compile time or at run time as part of this
> > patch, and put the vector changes as the next patch (re-enabling the driver too)
> > 2. Put in the minimum changes for the new mbuf layout into this patch. It will
> > make this patch a little longer, but may still be doable as it's only a couple of
> > fields changing, not the whole structure.
> >
> > /Bruce
* Re: [dpdk-dev] [PATCH v2 07/15] vmxnet3: support of unified packet type
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 07/15] vmxnet3: " Helin Zhang
@ 2015-02-11 1:46 ` Yong Wang
0 siblings, 0 replies; 257+ messages in thread
From: Yong Wang @ 2015-02-11 1:46 UTC (permalink / raw)
To: Helin Zhang, dev
On 2/8/15, 10:40 PM, "Helin Zhang" <helin.zhang@intel.com> wrote:
>To unify packet types among all PMDs, bit masks of packet type for
>ol_flags are replaced by unified packet type.
>
>Signed-off-by: Helin Zhang <helin.zhang@intel.com>
>---
Acked-by: Yong Wang <yongwang@vmware.com>
> lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
>v2 changes:
>* Used redefined packet types and enlarged packet_type field in mbuf.
>
>diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
>b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
>index 8425f32..c85ebd8 100644
>--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
>+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
>@@ -650,9 +650,9 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf
>**rx_pkts, uint16_t nb_pkts)
> struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
>
> if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
>- rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
>+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
> else
>- rxm->ol_flags |= PKT_RX_IPV4_HDR;
>+ rxm->packet_type = RTE_PTYPE_L3_IPV4;
>
> if (!rcd->cnc) {
> if (!rcd->ipc)
>--
>1.9.3
>
* Re: [dpdk-dev] [PATCH 06/17] bond: support of unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 06/17] bond: " Helin Zhang
@ 2015-02-11 15:01 ` Declan Doherty
2015-02-13 0:36 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Declan Doherty @ 2015-02-11 15:01 UTC (permalink / raw)
To: Helin Zhang, dev
On 29/01/15 03:15, Helin Zhang wrote:
> To unify packet types among all PMDs, bit masks of packet type for
> ol_flags are replaced by unified packet type.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> ---
> lib/librte_pmd_bond/rte_eth_bond_pmd.c | 9 ++++-----
> 1 file changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/lib/librte_pmd_bond/rte_eth_bond_pmd.c b/lib/librte_pmd_bond/rte_eth_bond_pmd.c
> index 8b80297..acd8e77 100644
> --- a/lib/librte_pmd_bond/rte_eth_bond_pmd.c
> +++ b/lib/librte_pmd_bond/rte_eth_bond_pmd.c
> @@ -319,12 +319,11 @@ xmit_l23_hash(const struct rte_mbuf *buf, uint8_t slave_count)
>
> hash = ether_hash(eth_hdr);
>
> - if (buf->ol_flags & PKT_RX_IPV4_HDR) {
> + if (RTE_ETH_IS_IPV4_HDR(buf->packet_type)) {
> struct ipv4_hdr *ipv4_hdr = (struct ipv4_hdr *)
> ((char *)(eth_hdr + 1) + vlan_offset);
> l3hash = ipv4_hash(ipv4_hdr);
> -
> - } else if (buf->ol_flags & PKT_RX_IPV6_HDR) {
> + } else if (RTE_ETH_IS_IPV6_HDR(buf->packet_type)) {
> struct ipv6_hdr *ipv6_hdr = (struct ipv6_hdr *)
> ((char *)(eth_hdr + 1) + vlan_offset);
> l3hash = ipv6_hash(ipv6_hdr);
> @@ -346,7 +345,7 @@ xmit_l34_hash(const struct rte_mbuf *buf, uint8_t slave_count)
> struct tcp_hdr *tcp_hdr = NULL;
> uint32_t hash, l3hash = 0, l4hash = 0;
>
> - if (buf->ol_flags & PKT_RX_IPV4_HDR) {
> + if (RTE_ETH_IS_IPV4_HDR(buf->packet_type)) {
> struct ipv4_hdr *ipv4_hdr = (struct ipv4_hdr *)
> ((char *)(eth_hdr + 1) + vlan_offset);
> size_t ip_hdr_offset;
> @@ -365,7 +364,7 @@ xmit_l34_hash(const struct rte_mbuf *buf, uint8_t slave_count)
> ip_hdr_offset);
> l4hash = HASH_L4_PORTS(udp_hdr);
> }
> - } else if (buf->ol_flags & PKT_RX_IPV6_HDR) {
> + } else if (RTE_ETH_IS_IPV6_HDR(buf->packet_type)) {
> struct ipv6_hdr *ipv6_hdr = (struct ipv6_hdr *)
> ((char *)(eth_hdr + 1) + vlan_offset);
> l3hash = ipv6_hash(ipv6_hdr);
>
Hey Helin,
this patch should no longer be necessary, as commit
bffc9b35e3acd70895b73616c850d8d37fe5732e removed all references to
ol_flags in the link bonding code.
Declan
* Re: [dpdk-dev] [PATCH 06/17] bond: support of unified packet type
2015-02-11 15:01 ` Declan Doherty
@ 2015-02-13 0:36 ` Zhang, Helin
0 siblings, 0 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-02-13 0:36 UTC (permalink / raw)
To: Doherty, Declan, dev
Hi Declan
Yes, I got it. I already have a v2 of the patch set which no longer touches the bond code. Thanks!
Regards,
Helin
> -----Original Message-----
> From: Doherty, Declan
> Sent: Wednesday, February 11, 2015 11:01 PM
> To: Zhang, Helin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 06/17] bond: support of unified packet type
>
> On 29/01/15 03:15, Helin Zhang wrote:
> > To unify packet types among all PMDs, bit masks of packet type for
> > ol_flags are replaced by unified packet type.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > ---
> > lib/librte_pmd_bond/rte_eth_bond_pmd.c | 9 ++++-----
> > 1 file changed, 4 insertions(+), 5 deletions(-)
> >
> > diff --git a/lib/librte_pmd_bond/rte_eth_bond_pmd.c
> > b/lib/librte_pmd_bond/rte_eth_bond_pmd.c
> > index 8b80297..acd8e77 100644
> > --- a/lib/librte_pmd_bond/rte_eth_bond_pmd.c
> > +++ b/lib/librte_pmd_bond/rte_eth_bond_pmd.c
> > @@ -319,12 +319,11 @@ xmit_l23_hash(const struct rte_mbuf *buf,
> > uint8_t slave_count)
> >
> > hash = ether_hash(eth_hdr);
> >
> > - if (buf->ol_flags & PKT_RX_IPV4_HDR) {
> > + if (RTE_ETH_IS_IPV4_HDR(buf->packet_type)) {
> > struct ipv4_hdr *ipv4_hdr = (struct ipv4_hdr *)
> > ((char *)(eth_hdr + 1) + vlan_offset);
> > l3hash = ipv4_hash(ipv4_hdr);
> > -
> > - } else if (buf->ol_flags & PKT_RX_IPV6_HDR) {
> > + } else if (RTE_ETH_IS_IPV6_HDR(buf->packet_type)) {
> > struct ipv6_hdr *ipv6_hdr = (struct ipv6_hdr *)
> > ((char *)(eth_hdr + 1) + vlan_offset);
> > l3hash = ipv6_hash(ipv6_hdr);
> > @@ -346,7 +345,7 @@ xmit_l34_hash(const struct rte_mbuf *buf, uint8_t
> slave_count)
> > struct tcp_hdr *tcp_hdr = NULL;
> > uint32_t hash, l3hash = 0, l4hash = 0;
> >
> > - if (buf->ol_flags & PKT_RX_IPV4_HDR) {
> > + if (RTE_ETH_IS_IPV4_HDR(buf->packet_type)) {
> > struct ipv4_hdr *ipv4_hdr = (struct ipv4_hdr *)
> > ((char *)(eth_hdr + 1) + vlan_offset);
> > size_t ip_hdr_offset;
> > @@ -365,7 +364,7 @@ xmit_l34_hash(const struct rte_mbuf *buf, uint8_t
> slave_count)
> > ip_hdr_offset);
> > l4hash = HASH_L4_PORTS(udp_hdr);
> > }
> > - } else if (buf->ol_flags & PKT_RX_IPV6_HDR) {
> > + } else if (RTE_ETH_IS_IPV6_HDR(buf->packet_type)) {
> > struct ipv6_hdr *ipv6_hdr = (struct ipv6_hdr *)
> > ((char *)(eth_hdr + 1) + vlan_offset);
> > l3hash = ipv6_hash(ipv6_hdr);
> >
>
> Hey Helin,
> this patch should no longer be necessary as commit #
> bffc9b35e3acd70895b73616c850d8d37fe5732e removed all references to the
> ol_flags in the link bonding code.
>
> Declan
* Re: [dpdk-dev] [PATCH v2 14/15] examples/l3fwd: support of unified packet type
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 14/15] examples/l3fwd: " Helin Zhang
@ 2015-02-16 17:04 ` Ananyev, Konstantin
2015-02-17 2:57 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Ananyev, Konstantin @ 2015-02-16 17:04 UTC (permalink / raw)
To: Zhang, Helin, dev
Hi Helin,
> -----Original Message-----
> From: Zhang, Helin
> Sent: Monday, February 09, 2015 6:41 AM
> To: dev@dpdk.org
> Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev, Konstantin; Richardson, Bruce; Zhang, Helin
> Subject: [PATCH v2 14/15] examples/l3fwd: support of unified packet type
>
> To unify packet types among all PMDs, bit masks and relevant macros
> of packet type for ol_flags are replaced by unified packet type and
> relevant macros.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> ---
> examples/l3fwd/main.c | 64 ++++++++++++++++++++++++++++-----------------------
> 1 file changed, 35 insertions(+), 29 deletions(-)
>
> v2 changes:
> * Used redefined packet types and enlarged packet_type field in mbuf.
>
> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> index 6f7d7d4..302322e 100644
> --- a/examples/l3fwd/main.c
> +++ b/examples/l3fwd/main.c
> @@ -958,7 +958,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
>
> eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
>
> - if (m->ol_flags & PKT_RX_IPV4_HDR) {
> + if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
> /* Handle IPv4 headers.*/
> ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char *) +
> sizeof(struct ether_hdr));
> @@ -993,7 +993,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
>
> send_single_packet(m, dst_port);
>
> - } else {
> + } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
If you changed from 'else' to 'else if' here, then I suppose you'll need to add another 'else' after it
to handle the case where input packets are neither IPv4 nor IPv6.
Otherwise you might start 'leaking' such mbufs.
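Something along these lines would do it (untested, just to illustrate the
missing branch):

	} else {
		/* neither IPv4 nor IPv6: free the mbuf, otherwise it leaks */
		rte_pktmbuf_free(m);
		return;
	}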
> /* Handle IPv6 headers.*/
> struct ipv6_hdr *ipv6_hdr;
>
> @@ -1039,11 +1039,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
> * to BAD_PORT value.
> */
> static inline __attribute__((always_inline)) void
> -rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
> +rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint16_t ptype)
Shouldn't it be 'uint32_t ptype'?
> {
> uint8_t ihl;
>
> - if ((flags & PKT_RX_IPV4_HDR) != 0) {
> + if (RTE_ETH_IS_IPV4_HDR(ptype)) {
>
> ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
>
> @@ -1074,11 +1074,11 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
> struct ipv6_hdr *ipv6_hdr;
> struct ether_hdr *eth_hdr;
>
> - if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
> + if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
> if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
> &next_hop) != 0)
> next_hop = portid;
> - } else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
> + } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
> eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
> ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
> if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
> @@ -1112,17 +1112,19 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
> ve = val_eth[dp];
>
> dst_port[0] = dp;
> - rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
> + rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
>
> te = _mm_blend_epi16(te, ve, MASK_ETH);
> _mm_store_si128((__m128i *)eth_hdr, te);
> }
>
> /*
> - * Read ol_flags and destination IPV4 addresses from 4 mbufs.
> + * Read packet_type and destination IPV4 addresses from 4 mbufs.
> */
> static inline void
> -processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
> +processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
> + __m128i *dip,
> + uint32_t *ipv4_flag)
> {
> struct ipv4_hdr *ipv4_hdr;
> struct ether_hdr *eth_hdr;
> @@ -1131,22 +1133,20 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
> eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
> ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
> x0 = ipv4_hdr->dst_addr;
> - flag[0] = pkt[0]->ol_flags & PKT_RX_IPV4_HDR;
>
> eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
> ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
> x1 = ipv4_hdr->dst_addr;
> - flag[0] &= pkt[1]->ol_flags;
>
> eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
> ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
> x2 = ipv4_hdr->dst_addr;
> - flag[0] &= pkt[2]->ol_flags;
>
> eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
> ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
> x3 = ipv4_hdr->dst_addr;
> - flag[0] &= pkt[3]->ol_flags;
> + *ipv4_flag = pkt[0]->packet_type & pkt[1]->packet_type &
> + pkt[2]->packet_type & pkt[3]->packet_type & RTE_PTYPE_L3_IPV4;
Why not as it was before:
flag[0] = pkt[0]->packet_type & ...
...
flag[0] &= pkt[1]->packet_type;
...
Why do you need to combine them?
>
> dip[0] = _mm_set_epi32(x3, x2, x1, x0);
> }
> @@ -1156,8 +1156,12 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
> * If lookup fails, use incoming port (portid) as destination port.
> */
> static inline void
> -processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
> - uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
> +processx4_step2(const struct lcore_conf *qconf,
> + __m128i dip,
> + uint32_t ipv4_flag,
> + uint8_t portid,
> + struct rte_mbuf *pkt[FWDSTEP],
> + uint16_t dprt[FWDSTEP])
> {
> rte_xmm_t dst;
> const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
> @@ -1167,7 +1171,7 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
> dip = _mm_shuffle_epi8(dip, bswap_mask);
>
> /* if all 4 packets are IPV4. */
> - if (likely(flag != 0)) {
> + if (likely(ipv4_flag)) {
> rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
> } else {
> dst.x = dip;
> @@ -1218,13 +1222,13 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
> _mm_store_si128(p[3], te[3]);
>
> rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
> - &dst_port[0], pkt[0]->ol_flags);
> + &dst_port[0], pkt[0]->packet_type);
> rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
> - &dst_port[1], pkt[1]->ol_flags);
> + &dst_port[1], pkt[1]->packet_type);
> rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
> - &dst_port[2], pkt[2]->ol_flags);
> + &dst_port[2], pkt[2]->packet_type);
> rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
> - &dst_port[3], pkt[3]->ol_flags);
> + &dst_port[3], pkt[3]->packet_type);
> }
>
> /*
> @@ -1411,7 +1415,7 @@ main_loop(__attribute__((unused)) void *dummy)
> uint16_t *lp;
> uint16_t dst_port[MAX_PKT_BURST];
> __m128i dip[MAX_PKT_BURST / FWDSTEP];
> - uint32_t flag[MAX_PKT_BURST / FWDSTEP];
> + uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
> uint16_t pnum[MAX_PKT_BURST + 1];
> #endif
>
> @@ -1481,14 +1485,16 @@ main_loop(__attribute__((unused)) void *dummy)
> */
> int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
> for (j = 0; j < n ; j+=4) {
> - uint32_t ol_flag = pkts_burst[j]->ol_flags
> - & pkts_burst[j+1]->ol_flags
> - & pkts_burst[j+2]->ol_flags
> - & pkts_burst[j+3]->ol_flags;
> - if (ol_flag & PKT_RX_IPV4_HDR ) {
> + uint32_t pkt_type =
> + pkts_burst[j]->packet_type &
> + pkts_burst[j+1]->packet_type &
> + pkts_burst[j+2]->packet_type &
> + pkts_burst[j+3]->packet_type;
> + if (pkt_type & RTE_PTYPE_L3_IPV4) {
> simple_ipv4_fwd_4pkts(&pkts_burst[j],
> portid, qconf);
> - } else if (ol_flag & PKT_RX_IPV6_HDR) {
> + } else if (pkt_type &
> + RTE_PTYPE_L3_IPV6) {
> simple_ipv6_fwd_4pkts(&pkts_burst[j],
> portid, qconf);
> } else {
> @@ -1513,13 +1519,13 @@ main_loop(__attribute__((unused)) void *dummy)
> for (j = 0; j != k; j += FWDSTEP) {
> processx4_step1(&pkts_burst[j],
> &dip[j / FWDSTEP],
> - &flag[j / FWDSTEP]);
> + &ipv4_flag[j / FWDSTEP]);
> }
>
> k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
> for (j = 0; j != k; j += FWDSTEP) {
> processx4_step2(qconf, dip[j / FWDSTEP],
> - flag[j / FWDSTEP], portid,
> + ipv4_flag[j / FWDSTEP], portid,
> &pkts_burst[j], &dst_port[j]);
> }
>
> --
> 1.9.3
* Re: [dpdk-dev] [PATCH v2 14/15] examples/l3fwd: support of unified packet type
2015-02-16 17:04 ` Ananyev, Konstantin
@ 2015-02-17 2:57 ` Zhang, Helin
0 siblings, 0 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-02-17 2:57 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Tuesday, February 17, 2015 1:05 AM
> To: Zhang, Helin; dev@dpdk.org
> Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Richardson, Bruce
> Subject: RE: [PATCH v2 14/15] examples/l3fwd: support of unified packet type
>
> Hi Helin,
>
> > -----Original Message-----
> > From: Zhang, Helin
> > Sent: Monday, February 09, 2015 6:41 AM
> > To: dev@dpdk.org
> > Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev, Konstantin;
> > Richardson, Bruce; Zhang, Helin
> > Subject: [PATCH v2 14/15] examples/l3fwd: support of unified packet
> > type
> >
> > To unify packet types among all PMDs, bit masks and relevant macros of
> > packet type for ol_flags are replaced by unified packet type and
> > relevant macros.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > ---
> > examples/l3fwd/main.c | 64
> > ++++++++++++++++++++++++++++-----------------------
> > 1 file changed, 35 insertions(+), 29 deletions(-)
> >
> > v2 changes:
> > * Used redefined packet types and enlarged packet_type field in mbuf.
> >
> > diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c index
> > 6f7d7d4..302322e 100644
> > --- a/examples/l3fwd/main.c
> > +++ b/examples/l3fwd/main.c
> > @@ -958,7 +958,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t
> > portid, struct lcore_conf *qcon
> >
> > eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
> >
> > - if (m->ol_flags & PKT_RX_IPV4_HDR) {
> > + if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
> > /* Handle IPv4 headers.*/
> > ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char *)
> +
> > sizeof(struct ether_hdr));
> > @@ -993,7 +993,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t
> > portid, struct lcore_conf *qcon
> >
> > send_single_packet(m, dst_port);
> >
> > - } else {
> > + } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
>
> If you changed from 'else' to 'else if' here, then I suppose you'll need to add
> another 'else' after it
> to handle the case where input packets are neither IPv4 nor IPv6.
> Otherwise you might start 'leaking' such mbufs.
Agreed, will add code to free the mbuf there.
>
> > /* Handle IPv6 headers.*/
> > struct ipv6_hdr *ipv6_hdr;
> >
> > @@ -1039,11 +1039,11 @@ l3fwd_simple_forward(struct rte_mbuf *m,
> uint8_t portid, struct lcore_conf *qcon
> > * to BAD_PORT value.
> > */
> > static inline __attribute__((always_inline)) void
> > -rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t
> > flags)
> > +rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint16_t
> > +ptype)
>
> Shouldn't it be 'uint32_t ptype'?
Agreed. Will correct it.
>
> > {
> > uint8_t ihl;
> >
> > - if ((flags & PKT_RX_IPV4_HDR) != 0) {
> > + if (RTE_ETH_IS_IPV4_HDR(ptype)) {
> >
> > ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
> >
> > @@ -1074,11 +1074,11 @@ get_dst_port(const struct lcore_conf *qconf,
> struct rte_mbuf *pkt,
> > struct ipv6_hdr *ipv6_hdr;
> > struct ether_hdr *eth_hdr;
> >
> > - if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
> > + if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
> > if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
> > &next_hop) != 0)
> > next_hop = portid;
> > - } else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
> > + } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
> > eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
> > ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
> > if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
> > @@ -1112,17 +1112,19 @@ process_packet(struct lcore_conf *qconf, struct
> rte_mbuf *pkt,
> > ve = val_eth[dp];
> >
> > dst_port[0] = dp;
> > - rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
> > + rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
> >
> > te = _mm_blend_epi16(te, ve, MASK_ETH);
> > _mm_store_si128((__m128i *)eth_hdr, te); }
> >
> > /*
> > - * Read ol_flags and destination IPV4 addresses from 4 mbufs.
> > + * Read packet_type and destination IPV4 addresses from 4 mbufs.
> > */
> > static inline void
> > -processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t
> > *flag)
> > +processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
> > + __m128i *dip,
> > + uint32_t *ipv4_flag)
> > {
> > struct ipv4_hdr *ipv4_hdr;
> > struct ether_hdr *eth_hdr;
> > @@ -1131,22 +1133,20 @@ processx4_step1(struct rte_mbuf
> *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
> > eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
> > ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
> > x0 = ipv4_hdr->dst_addr;
> > - flag[0] = pkt[0]->ol_flags & PKT_RX_IPV4_HDR;
> >
> > eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
> > ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
> > x1 = ipv4_hdr->dst_addr;
> > - flag[0] &= pkt[1]->ol_flags;
> >
> > eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
> > ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
> > x2 = ipv4_hdr->dst_addr;
> > - flag[0] &= pkt[2]->ol_flags;
> >
> > eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
> > ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
> > x3 = ipv4_hdr->dst_addr;
> > - flag[0] &= pkt[3]->ol_flags;
> > + *ipv4_flag = pkt[0]->packet_type & pkt[1]->packet_type &
> > + pkt[2]->packet_type & pkt[3]->packet_type & RTE_PTYPE_L3_IPV4;
>
> Why not as it was before:
> flag[0] = pkt[0]->packet_type & ...
> ...
> flag[0] &= pkt[1]->packet_type;
> ...
>
> Why do you need to combine them?
No specific reason, will change it back as it was before. Thanks!
Regards,
Helin
>
> >
> > dip[0] = _mm_set_epi32(x3, x2, x1, x0); } @@ -1156,8 +1156,12 @@
> > processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t
> *flag)
> > * If lookup fails, use incoming port (portid) as destination port.
> > */
> > static inline void
> > -processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
> > - uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
> > +processx4_step2(const struct lcore_conf *qconf,
> > + __m128i dip,
> > + uint32_t ipv4_flag,
> > + uint8_t portid,
> > + struct rte_mbuf *pkt[FWDSTEP],
> > + uint16_t dprt[FWDSTEP])
> > {
> > rte_xmm_t dst;
> > const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10,
> > 11, @@ -1167,7 +1171,7 @@ processx4_step2(const struct lcore_conf
> *qconf, __m128i dip, uint32_t flag,
> > dip = _mm_shuffle_epi8(dip, bswap_mask);
> >
> > /* if all 4 packets are IPV4. */
> > - if (likely(flag != 0)) {
> > + if (likely(ipv4_flag)) {
> > rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
> > } else {
> > dst.x = dip;
> > @@ -1218,13 +1222,13 @@ processx4_step3(struct rte_mbuf
> *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
> > _mm_store_si128(p[3], te[3]);
> >
> > rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
> > - &dst_port[0], pkt[0]->ol_flags);
> > + &dst_port[0], pkt[0]->packet_type);
> > rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
> > - &dst_port[1], pkt[1]->ol_flags);
> > + &dst_port[1], pkt[1]->packet_type);
> > rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
> > - &dst_port[2], pkt[2]->ol_flags);
> > + &dst_port[2], pkt[2]->packet_type);
> > rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
> > - &dst_port[3], pkt[3]->ol_flags);
> > + &dst_port[3], pkt[3]->packet_type);
> > }
> >
> > /*
> > @@ -1411,7 +1415,7 @@ main_loop(__attribute__((unused)) void *dummy)
> > uint16_t *lp;
> > uint16_t dst_port[MAX_PKT_BURST];
> > __m128i dip[MAX_PKT_BURST / FWDSTEP];
> > - uint32_t flag[MAX_PKT_BURST / FWDSTEP];
> > + uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
> > uint16_t pnum[MAX_PKT_BURST + 1];
> > #endif
> >
> > @@ -1481,14 +1485,16 @@ main_loop(__attribute__((unused)) void
> *dummy)
> > */
> > int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
> > for (j = 0; j < n ; j+=4) {
> > - uint32_t ol_flag = pkts_burst[j]->ol_flags
> > - & pkts_burst[j+1]->ol_flags
> > - & pkts_burst[j+2]->ol_flags
> > - & pkts_burst[j+3]->ol_flags;
> > - if (ol_flag & PKT_RX_IPV4_HDR ) {
> > + uint32_t pkt_type =
> > + pkts_burst[j]->packet_type &
> > + pkts_burst[j+1]->packet_type &
> > + pkts_burst[j+2]->packet_type &
> > + pkts_burst[j+3]->packet_type;
> > + if (pkt_type & RTE_PTYPE_L3_IPV4) {
> > simple_ipv4_fwd_4pkts(&pkts_burst[j],
> > portid, qconf);
> > - } else if (ol_flag & PKT_RX_IPV6_HDR) {
> > + } else if (pkt_type &
> > + RTE_PTYPE_L3_IPV6) {
> > simple_ipv6_fwd_4pkts(&pkts_burst[j],
> > portid, qconf);
> > } else {
> > @@ -1513,13 +1519,13 @@ main_loop(__attribute__((unused)) void
> *dummy)
> > for (j = 0; j != k; j += FWDSTEP) {
> > processx4_step1(&pkts_burst[j],
> > &dip[j / FWDSTEP],
> > - &flag[j / FWDSTEP]);
> > + &ipv4_flag[j / FWDSTEP]);
> > }
> >
> > k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
> > for (j = 0; j != k; j += FWDSTEP) {
> > processx4_step2(qconf, dip[j / FWDSTEP],
> > - flag[j / FWDSTEP], portid,
> > + ipv4_flag[j / FWDSTEP], portid,
> > &pkts_burst[j], &dst_port[j]);
> > }
> >
> > --
> > 1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
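Before moving on to v3, a note on the processx4_step1() exchange above: both
forms are equivalent, because AND-ing the four packet_type words and masking
with RTE_PTYPE_L3_IPV4 (in either order) is non-zero only when every one of
the four packets carries the IPv4 L3 type. A minimal hedged sketch of the two
variants; the helper names are made up for illustration and RTE_PTYPE_L3_IPV4
comes from the mbuf patch later in this series:

#include <stdint.h>
#include <rte_mbuf.h>   /* struct rte_mbuf, RTE_PTYPE_L3_IPV4 */

static inline void
collect_ipv4_flag_stepwise(struct rte_mbuf *pkt[4], uint32_t *ipv4_flag)
{
        /* Reviewer's suggestion: mask the first packet, then accumulate. */
        *ipv4_flag = pkt[0]->packet_type & RTE_PTYPE_L3_IPV4;
        *ipv4_flag &= pkt[1]->packet_type;
        *ipv4_flag &= pkt[2]->packet_type;
        *ipv4_flag &= pkt[3]->packet_type;
}

static inline void
collect_ipv4_flag_combined(struct rte_mbuf *pkt[4], uint32_t *ipv4_flag)
{
        /* Patch variant: AND all four types first, mask once at the end. */
        *ipv4_flag = pkt[0]->packet_type & pkt[1]->packet_type &
                     pkt[2]->packet_type & pkt[3]->packet_type &
                     RTE_PTYPE_L3_IPV4;
}

Either way, processx4_step2() only needs a non-zero value to know that all
four packets are IPv4 and can go through rte_lpm_lookupx4().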
* [dpdk-dev] [PATCH v3 00/16] unified packet type
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
` (18 preceding siblings ...)
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 01/16] mbuf: redefinition of packet_type in rte_mbuf Helin Zhang
` (18 more replies)
19 siblings, 19 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
Currently only 6 bits stored in ol_flags are used to indicate the packet
type. This is not enough, as some NIC hardware can recognize quite a lot of
packet types, e.g. i40e hardware can recognize more than 150 packet types.
Hiding those packet types hides hardware offload capabilities which could be
quite useful for improving performance and for end users. So a unified
packet type is needed to support all possible PMDs. The 16-bit packet_type
field in the mbuf structure can be enlarged to 32 bits and used for this
purpose. In addition, the packet type bits stored in the ol_flags field
should be removed entirely, which frees up 6 bits of ol_flags as a benefit.
The 32 bits of packet_type are divided into several sub-fields to indicate
different packet type information of a packet. The design is to divide
those bits into fields for L2 types, L3 types, L4 types, tunnel types,
inner L2 types, inner L3 types and inner L4 types. All PMDs should translate
the offloaded packet types into these 7 fields of information for user
applications.
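To make the intended usage concrete, here is a minimal hedged sketch of how an
application could consume one of these sub-fields once the series is applied;
the function name is invented for the example and the RTE_PTYPE_* masks are
the ones defined in patch 03/16 below:

#include <stdint.h>
#include <rte_mbuf.h>   /* RTE_PTYPE_* masks from patch 03/16 */

/* Classify the (outer) L4 protocol of a received mbuf. */
static inline const char *
outer_l4_name(const struct rte_mbuf *m)
{
        switch (m->packet_type & RTE_PTYPE_L4_MASK) {
        case RTE_PTYPE_L4_TCP:
                return "TCP";
        case RTE_PTYPE_L4_UDP:
                return "UDP";
        case RTE_PTYPE_L4_SCTP:
                return "SCTP";
        case RTE_PTYPE_L4_FRAG:
                return "fragment";
        default:
                return "other/unknown";
        }
}

The same masking pattern applies to the other six fields through their
respective *_MASK values.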
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
* Used redefined packet types and enlarged packet_type field for all PMDs
and corresponding applications.
* Removed changes in bond and its relevant application, as they are no longer
needed after the recent bond changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Put vector ixgbe changes right after mbuf changes.
* Disabled vector ixgbe PMD by default, as mbuf layout changed, and then
re-enabled it after vector ixgbe PMD updated.
* Put the definitions of unified packet type into a single patch.
* Minor bug fixes and enhancements in l3fwd example.
Helin Zhang (16):
mbuf: redefinition of packet_type in rte_mbuf
ixgbe: support of unified packet type for vector
mbuf: add definitions of unified packet types
e1000: support of unified packet type
ixgbe: support of unified packet type
i40e: support of unified packet type
enic: support of unified packet type
vmxnet3: support of unified packet type
app/test-pipeline: support of unified packet type
app/testpmd: support of unified packet type
examples/ip_fragmentation: support of unified packet type
examples/ip_reassembly: support of unified packet type
examples/l3fwd-acl: support of unified packet type
examples/l3fwd-power: support of unified packet type
examples/l3fwd: support of unified packet type
mbuf: remove old packet type bit masks
app/test-pipeline/pipeline_hash.c | 7 +-
app/test-pmd/csumonly.c | 10 +-
app/test-pmd/rxonly.c | 9 +-
examples/ip_fragmentation/main.c | 7 +-
examples/ip_reassembly/main.c | 7 +-
examples/l3fwd-acl/main.c | 19 +-
examples/l3fwd-power/main.c | 5 +-
examples/l3fwd/main.c | 71 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
lib/librte_mbuf/rte_mbuf.c | 6 -
lib/librte_mbuf/rte_mbuf.h | 127 +++-
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++-
lib/librte_pmd_enic/enic_main.c | 14 +-
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++-------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++-
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +-
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 +-
17 files changed, 921 insertions(+), 448 deletions(-)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v3 01/16] mbuf: redefinition of packet_type in rte_mbuf
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 02/16] ixgbe: support of unified packet type for vector Helin Zhang
` (17 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
In order to unify the packet type, the 'packet_type' field in
'struct rte_mbuf' needs to be extended from 16 to 32 bits.
Accordingly, some fields in 'struct rte_mbuf' are re-organized to
support this change for the vector PMD. As 'struct rte_kni_mbuf'
for KNI must map exactly onto 'struct rte_mbuf', it is modified
accordingly. In addition, the vector PMD of ixgbe is disabled by
default, as the 'struct rte_mbuf' layout has changed.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
---
config/common_linuxapp | 2 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 ++--
lib/librte_mbuf/rte_mbuf.h | 23 +++++++++++++++-------
3 files changed, 19 insertions(+), 10 deletions(-)
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Disabled vector ixgbe PMD by default, as mbuf layout changed.
diff --git a/config/common_linuxapp b/config/common_linuxapp
index d428f84..7a530b9 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -160,7 +160,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=y
+CONFIG_RTE_IXGBE_INC_VECTOR=n
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
#
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index 1e55c2d..bd1cc09 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -117,9 +117,9 @@ struct rte_kni_mbuf {
uint16_t data_off; /**< Start address of data in segment buffer. */
char pad1[4];
uint64_t ol_flags; /**< Offload features. */
- char pad2[2];
- uint16_t data_len; /**< Amount of data in segment buffer. */
+ char pad2[4];
uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
/* fields on second cache line */
char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index e3008c6..6f8e1dd 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -259,17 +259,26 @@ struct rte_mbuf {
/* remaining bytes are set on RX when pulling packet from descriptor */
MARKER rx_descriptor_fields1;
- /**
- * The packet type, which is used to indicate ordinary packet and also
- * tunneled packet format, i.e. each number is represented a type of
- * packet.
+ /*
+ * The packet type, which is the combination of outer/inner L2, L3, L4
+ * and tunnel types.
*/
- uint16_t packet_type;
+ union {
+ uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
+ struct {
+ uint32_t l2_type:4; /**< (Outer) L2 type. */
+ uint32_t l3_type:4; /**< (Outer) L3 type. */
+ uint32_t l4_type:4; /**< (Outer) L4 type. */
+ uint32_t tun_type:4; /**< Tunnel type. */
+ uint32_t inner_l2_type:4; /**< Inner L2 type. */
+ uint32_t inner_l3_type:4; /**< Inner L3 type. */
+ uint32_t inner_l4_type:4; /**< Inner L4 type. */
+ };
+ };
- uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
- uint16_t reserved;
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
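A small hedged sketch of what the union added by this patch allows: code can
either write the whole 32-bit packet_type at once or set the 4-bit sub-fields
individually through the anonymous struct. The function name and the field
values below are purely illustrative:

#include <stdint.h>
#include <rte_mbuf.h>   /* struct rte_mbuf with the packet_type union */

static inline void
fill_ptype_example(struct rte_mbuf *m)
{
        /* Either assign the whole 32-bit value in one go ... */
        m->packet_type = 0;

        /* ... or fill individual sub-fields via the anonymous struct. */
        m->l2_type = 1;         /* illustrative index for "MAC"  */
        m->l3_type = 1;         /* illustrative index for "IPv4" */
        m->l4_type = 2;         /* illustrative index for "UDP"  */
}

Because the sub-fields alias the 32-bit word, a PMD that builds the value from
a lookup table (as later patches do) and one that sets fields one by one end
up with the same packet_type.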
* [dpdk-dev] [PATCH v3 02/16] ixgbe: support of unified packet type for vector
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 01/16] mbuf: redefinition of packet_type in rte_mbuf Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 03/16] mbuf: add definitions of unified packet types Helin Zhang
` (16 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
To unify the packet type, the packet type bit masks in ol_flags are
replaced. In addition, more packet types (UDP, TCP and SCTP) are
supported in the vectorized ixgbe PMD.
Note that around a 2% performance drop (64B packets) was observed when doing
4-port (1 port per 82599 card) IO forwarding on the same SNB core.
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
config/common_linuxapp | 2 +-
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +++++++++++++++++++----------------
2 files changed, 27 insertions(+), 24 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v3 changes:
* Put vector ixgbe changes right after mbuf changes.
* Enabled vector ixgbe PMD by default together with changes for updated
vector PMD.
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 7a530b9..d428f84 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -160,7 +160,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=n
+CONFIG_RTE_IXGBE_INC_VECTOR=y
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
#
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
index b54cb19..357eb1d 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
@@ -134,44 +134,35 @@ ixgbe_rxq_rearm(struct igb_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE
-#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
- PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
- PKT_RX_IPV6_HDR_EXT))
-#define OLFLAGS_MASK_V (((uint64_t)OLFLAGS_MASK << 48) | \
- ((uint64_t)OLFLAGS_MASK << 32) | \
- ((uint64_t)OLFLAGS_MASK << 16) | \
- ((uint64_t)OLFLAGS_MASK))
-#define PTYPE_SHIFT (1)
+#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
+ ((uint64_t)PKT_RX_VLAN_PKT))
#define VTAG_SHIFT (3)
static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
- __m128i ptype0, ptype1, vtag0, vtag1;
+ __m128i vtag0, vtag1;
union {
uint16_t e[4];
uint64_t dword;
} vol;
- ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
- ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);
- ptype1 = _mm_unpacklo_epi32(ptype0, ptype1);
vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
-
- ptype1 = _mm_slli_epi16(ptype1, PTYPE_SHIFT);
vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);
- ptype1 = _mm_or_si128(ptype1, vtag1);
- vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
+ vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;
rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
rx_pkts[2]->ol_flags = vol.e[2];
rx_pkts[3]->ol_flags = vol.e[3];
}
+
#else
#define desc_to_olflags_v(desc, rx_pkts) do {} while (0)
#endif
@@ -197,13 +188,15 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
uint64_t var;
__m128i shuf_msk;
__m128i crc_adjust = _mm_set_epi16(
- 0, 0, 0, 0, /* ignore non-length fields */
+ 0, 0, 0, /* ignore non-length fields */
+ -rxq->crc_len, /* sub crc on data_len */
0, /* ignore high-16bits of pkt_len */
-rxq->crc_len, /* sub crc on pkt_len */
- -rxq->crc_len, /* sub crc on data_len */
- 0 /* ignore pkt_type field */
+ 0, 0 /* ignore pkt_type field */
);
__m128i dd_check, eop_check;
+ __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
+ 0xFFFFFFFF, 0xFFFF07F0);
if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -234,12 +227,13 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* mask to shuffle from desc. to mbuf */
shuf_msk = _mm_set_epi8(
7, 6, 5, 4, /* octet 4~7, 32bits rss */
- 0xFF, 0xFF, /* skip high 16 bits vlan_macip, zero out */
15, 14, /* octet 14~15, low 16 bits vlan_macip */
+ 13, 12, /* octet 12~13, 16 bits data_len */
0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
13, 12, /* octet 12~13, low 16 bits pkt_len */
- 13, 12, /* octet 12~13, 16 bits data_len */
- 0xFF, 0xFF /* skip pkt_type field */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_type */
+ 1, /* octet 1, 8 bits pkt_type field */
+ 0 /* octet 0, 4 bits offset 4 pkt_type field */
);
/* Cache is empty -> need to scan the buffer rings, but first move
@@ -248,6 +242,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/*
* A. load 4 packet in one loop
+ * [A*. mask out 4 unused dirty field in desc]
* B. copy 4 mbuf point from swring to rx_pkts
* C. calc the number of DD bits among the 4 packets
* [C*. extract the end-of-packet bit, if requested]
@@ -289,6 +284,14 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.2 copy 2 mbuf point into rx_pkts */
_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);
+ /* A* mask out 0~3 bits RSS type */
+ descs[3] = _mm_and_si128(descs[3], desc_mask);
+ descs[2] = _mm_and_si128(descs[2], desc_mask);
+
+ /* A* mask out 0~3 bits RSS type */
+ descs[1] = _mm_and_si128(descs[1], desc_mask);
+ descs[0] = _mm_and_si128(descs[0], desc_mask);
+
/* avoid compiler reorder optimization */
rte_compiler_barrier();
@@ -301,7 +304,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
- /* set ol_flags with packet type and vlan tag */
+ /* set ol_flags with vlan packet type */
desc_to_olflags_v(descs, &rx_pkts[pos]);
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
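The core of the vector change above is the SSE byte shuffle that copies
descriptor bytes, now including the packet type bits kept by desc_mask,
straight into the mbuf's rx_descriptor_fields1 area. A standalone hedged
sketch of the _mm_shuffle_epi8() mechanism follows; the byte positions are
made up for illustration and are not the real 82599 descriptor layout:

#include <stdint.h>
#include <stdio.h>
#include <tmmintrin.h>  /* SSSE3 _mm_shuffle_epi8 */

int
main(void)
{
        uint8_t desc[16], out[16];
        int i;

        /* Pretend this is a 16-byte RX descriptor: byte i holds value i. */
        for (i = 0; i < 16; i++)
                desc[i] = (uint8_t)i;

        /*
         * Each control byte selects one source byte by index; a control byte
         * with its high bit set (0xFF here) zeroes the output byte, which is
         * how unused mbuf bytes are cleared. _mm_set_epi8() arguments run
         * from output byte 15 down to output byte 0.
         */
        const __m128i shuf_msk = _mm_set_epi8(
                7, 6, 5, 4,             /* out[15..12] <- desc[7..4]   */
                15, 14, 13, 12,         /* out[11..8]  <- desc[15..12] */
                0xFF, 0xFF, 13, 12,     /* out[7..6] zeroed, out[5..4] <- desc[13..12] */
                0xFF, 0xFF, 1, 0);      /* out[3..2] zeroed, out[1..0] <- desc[1..0]   */

        __m128i d = _mm_loadu_si128((const __m128i *)desc);
        _mm_storeu_si128((__m128i *)out, _mm_shuffle_epi8(d, shuf_msk));

        for (i = 0; i < 16; i++)
                printf("%u ", out[i]);
        printf("\n");
        return 0;
}

Compiled with -mssse3, this prints the rearranged bytes; in the PMD the same
single instruction fills pkt_type, pkt_len, data_len, vlan_tci and the rss
field in one shot.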
* [dpdk-dev] [PATCH v3 03/16] mbuf: add definitions of unified packet types
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 01/16] mbuf: redefinition of packet_type in rte_mbuf Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 02/16] ixgbe: support of unified packet type for vector Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 9:01 ` Olivier MATZ
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 04/16] e1000: support of unified packet type Helin Zhang
` (15 subsequent siblings)
18 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
There are only 6 bit flags in ol_flags for indicating packet
types, which is not enough to describe all the possible packet
types hardware can recognize. For example, i40e hardware can
recognize more than 150 packet types. The unified packet type is
composed of L2 type, L3 type, L4 type, tunnel type, inner L2 type,
inner L3 type and inner L4 type fields, and is stored in the
32-bit 'packet_type' field of 'struct rte_mbuf'.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 90 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 90 insertions(+)
v3 changes:
* Put the definitions of unified packet type into a single patch.
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 6f8e1dd..2cdf8a0 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -192,6 +192,96 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
+/*
+ * The 32 bits are divided into several fields to mark packet types. Note that
+ * each field holds an index value rather than a set of independent flag bits.
+ * - Bit 3:0 is for L2 types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
+ * - Bit 15:12 is for tunnel types.
+ * - Bit 19:16 is for inner L2 types.
+ * - Bit 23:20 is for inner L3 types.
+ * - Bit 27:24 is for inner L4 types.
+ * - Bit 31:28 is reserved.
+ *
+ * To be compatible with Vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that L3 types values are selected for checking IPV4/IPV6 header from
+ * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
+ */
+#define RTE_PTYPE_UNKNOWN 0x00000000
+/* bit 3:0 for L2 types */
+#define RTE_PTYPE_L2_MAC 0x00000001
+#define RTE_PTYPE_L2_MAC_TIMESYNC 0x00000002
+#define RTE_PTYPE_L2_ARP 0x00000003
+#define RTE_PTYPE_L2_LLDP 0x00000004
+#define RTE_PTYPE_L2_MASK 0x0000000f
+/* bit 7:4 for L3 types */
+#define RTE_PTYPE_L3_IPV4 0x00000010
+#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
+#define RTE_PTYPE_L3_IPV6 0x00000040
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
+#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
+#define RTE_PTYPE_L3_MASK 0x000000f0
+/* bit 11:8 for L4 types */
+#define RTE_PTYPE_L4_TCP 0x00000100
+#define RTE_PTYPE_L4_UDP 0x00000200
+#define RTE_PTYPE_L4_FRAG 0x00000300
+#define RTE_PTYPE_L4_SCTP 0x00000400
+#define RTE_PTYPE_L4_ICMP 0x00000500
+#define RTE_PTYPE_L4_NONFRAG 0x00000600
+#define RTE_PTYPE_L4_MASK 0x00000f00
+/* bit 15:12 for tunnel types */
+#define RTE_PTYPE_TUNNEL_IP 0x00001000
+#define RTE_PTYPE_TUNNEL_GRE 0x00002000
+#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
+#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
+#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
+#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
+#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
+/* bit 19:16 for inner L2 types */
+#define RTE_PTYPE_INNER_L2_MAC 0x00010000
+#define RTE_PTYPE_INNER_L2_MAC_VLAN 0x00020000
+#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
+/* bit 23:20 for inner L3 types */
+#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
+#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
+#define RTE_PTYPE_INNER_INNER_L3_MASK 0x00f00000
+/* bit 27:24 for inner L4 types */
+#define RTE_PTYPE_INNER_L4_TCP 0x01000000
+#define RTE_PTYPE_INNER_L4_UDP 0x02000000
+#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
+#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
+#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
+#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
+#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
+/* bit 31:28 reserved */
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
+ * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
+ * determine if it is an IPv4 packet.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
+ * one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
+ * determine if it is an IPv6 packet.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneling packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+
/**
* Get the name of a RX offload flag
*
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
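To make the encoding above concrete, here is a hedged sketch that composes a
tunnelled packet type from the new masks and tests it with the helper macros
added by this patch; a DPDK build environment is assumed and the chosen
combination is just an example:

#include <stdint.h>
#include <stdio.h>
#include <rte_mbuf.h>   /* RTE_PTYPE_* and RTE_ETH_IS_*() from this patch */

int
main(void)
{
        /* A VXLAN packet: outer MAC/IPv4/UDP carrying inner MAC/IPv6/TCP. */
        uint32_t ptype = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4 |
                         RTE_PTYPE_L4_UDP | RTE_PTYPE_TUNNEL_VXLAN |
                         RTE_PTYPE_INNER_L2_MAC | RTE_PTYPE_INNER_L3_IPV6 |
                         RTE_PTYPE_INNER_L4_TCP;

        if (RTE_ETH_IS_IPV4_HDR(ptype))
                printf("outer L3 is IPv4\n");
        if (RTE_ETH_IS_TUNNEL_PKT(ptype))
                printf("tunnel type field: 0x%x\n",
                       ptype & RTE_PTYPE_TUNNEL_MASK);
        if ((ptype & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_TCP)
                printf("inner L4 is TCP\n");
        return 0;
}

Since each 4-bit field holds an index, masking and comparing for equality is
the right test for everything except the IPv4/IPv6 single-bit checks that the
two macros above rely on.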
* [dpdk-dev] [PATCH v3 04/16] e1000: support of unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (2 preceding siblings ...)
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 03/16] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 05/16] ixgbe: " Helin Zhang
` (14 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++++++++++++++++++++++++++++++++++-------
1 file changed, 83 insertions(+), 15 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 5c394a9..12a68f4 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -602,17 +602,85 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
+#define IGB_PACKET_TYPE_IPV4 0X01
+#define IGB_PACKET_TYPE_IPV4_TCP 0X11
+#define IGB_PACKET_TYPE_IPV4_UDP 0X21
+#define IGB_PACKET_TYPE_IPV4_SCTP 0X41
+#define IGB_PACKET_TYPE_IPV4_EXT 0X03
+#define IGB_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IGB_PACKET_TYPE_IPV6 0X04
+#define IGB_PACKET_TYPE_IPV6_TCP 0X14
+#define IGB_PACKET_TYPE_IPV6_UDP 0X24
+#define IGB_PACKET_TYPE_IPV6_EXT 0X0C
+#define IGB_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IGB_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IGB_PACKET_TYPE_IPV4_IPV6 0X05
+#define IGB_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IGB_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IGB_PACKET_TYPE_MAX 0X80
+#define IGB_PACKET_TYPE_MASK 0X7F
+#define IGB_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IGB_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IGB_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [IGB_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IGB_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IGB_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & E1000_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IGB_PACKET_TYPE_SHIFT) & IGB_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- };
+ uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH;
#if defined(RTE_LIBRTE_IEEE1588)
static uint32_t ip_pkt_etqf_map[8] = {
@@ -620,14 +688,10 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
0, 0, 0, 0,
};
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-#else
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ pkt_flags |= ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07];
#endif
- return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
+
+ return pkt_flags;
}
static inline uint64_t
@@ -802,6 +866,8 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
+ lo_dword.hs_rss.pkt_info);
/*
* Store the mbuf address into the next entry of the array
@@ -1036,6 +1102,8 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
+ lower.lo_dword.hs_rss.pkt_info);
/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
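The translation above is table-driven: the descriptor's packet-type bits,
after the 4-bit shift and 7-bit mask, index a cache-aligned array of
pre-composed RTE_PTYPE_* values, so the hot RX path does one load instead of
a chain of branches. A reduced hedged sketch of that pattern; the EX_* names,
the table size and the helper are invented for illustration, though the bit
meanings mirror the IGB_PACKET_TYPE_* macros in the patch:

#include <stdint.h>
#include <rte_mbuf.h>   /* RTE_PTYPE_* masks */

#define EX_PTYPE_IPV4   0x01    /* descriptor bit 0: IPv4 header seen */
#define EX_PTYPE_IPV6   0x04    /* descriptor bit 2: IPv6 header seen */
#define EX_PTYPE_TCP    0x10    /* descriptor bit 4: TCP header seen  */
#define EX_PTYPE_UDP    0x20    /* descriptor bit 5: UDP header seen  */

/* Pre-composed unified packet types, indexed by the raw descriptor bits. */
static const uint32_t ex_ptype_table[0x40] = {
        [EX_PTYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
        [EX_PTYPE_IPV4 | EX_PTYPE_TCP] = RTE_PTYPE_L2_MAC |
                RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
        [EX_PTYPE_IPV4 | EX_PTYPE_UDP] = RTE_PTYPE_L2_MAC |
                RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
        [EX_PTYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
        [EX_PTYPE_IPV6 | EX_PTYPE_TCP] = RTE_PTYPE_L2_MAC |
                RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
        /* entries left unset stay 0, i.e. RTE_PTYPE_UNKNOWN */
};

static inline uint32_t
ex_desc_to_ptype(uint16_t pkt_info)
{
        return ex_ptype_table[(pkt_info >> 4) & 0x3F];
}

The i40e patch later in the series uses the same idea, only with a table
indexed directly by the hardware's 8-bit ptype value.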
* [dpdk-dev] [PATCH v3 05/16] ixgbe: support of unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (3 preceding siblings ...)
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 04/16] e1000: support of unified packet type Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 06/16] i40e: " Helin Zhang
` (13 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
To unify packet type among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Note that around a 2.5% performance drop (64B packets) was observed when doing
4-port (1 port per 82599 card) IO forwarding on the same SNB core.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++++++++++++++++++++++++++++---------
1 file changed, 112 insertions(+), 34 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index e6766b3..a2e4234 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -866,40 +866,107 @@ end_of_tx:
* RX functions
*
**********************************************************************/
-static inline uint64_t
-rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
+#define IXGBE_PACKET_TYPE_IPV4 0X01
+#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
+#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
+#define IXGBE_PACKET_TYPE_IPV4_SCTP 0X41
+#define IXGBE_PACKET_TYPE_IPV4_EXT 0X03
+#define IXGBE_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IXGBE_PACKET_TYPE_IPV6 0X04
+#define IXGBE_PACKET_TYPE_IPV6_TCP 0X14
+#define IXGBE_PACKET_TYPE_IPV6_UDP 0X24
+#define IXGBE_PACKET_TYPE_IPV6_EXT 0X0C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IXGBE_PACKET_TYPE_IPV4_IPV6 0X05
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IXGBE_PACKET_TYPE_MAX 0X80
+#define IXGBE_PACKET_TYPE_MASK 0X7F
+#define IXGBE_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+ixgbe_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
+ static const uint32_t
+ ptype_table[IXGBE_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IXGBE_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4,
+ [IXGBE_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IXGBE_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IXGBE_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
};
+ if (unlikely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
- static uint64_t ip_rss_types_map[16] = {
+ pkt_info = (pkt_info >> IXGBE_PACKET_TYPE_SHIFT) &
+ IXGBE_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
+{
+ static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
PKT_RX_RSS_HASH, 0, 0, 0,
0, 0, 0, PKT_RX_FDIR,
};
-
#ifdef RTE_LIBRTE_IEEE1588
static uint64_t ip_pkt_etqf_map[8] = {
0, 0, 0, PKT_RX_IEEE1588_PTP,
0, 0, 0, 0,
};
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ if (likely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return ip_pkt_etqf_map[(pkt_info >> 4) & 0X07] |
+ ip_rss_types_map[pkt_info & 0xF];
+ else
+ return ip_rss_types_map[pkt_info & 0xF];
#else
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-
+ return ip_rss_types_map[pkt_info & 0xF];
#endif
- return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
}
static inline uint64_t
@@ -956,7 +1023,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
- int s[LOOK_AHEAD], nb_dd;
+ int nb_dd;
+ uint32_t s[LOOK_AHEAD];
+ uint16_t pkt_info[LOOK_AHEAD];
int i, j, nb_rx = 0;
@@ -979,6 +1048,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rxdp[j].wb.upper.status_error;
+ for (j = LOOK_AHEAD-1; j >= 0; --j)
+ pkt_info[j] = rxdp[j].wb.lower.lo_dword.hs_rss.pkt_info;
+
/* Compute how many status bits were set */
nb_dd = 0;
for (j = 0; j < LOOK_AHEAD; ++j)
@@ -996,12 +1068,13 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
- rxdp[j].wb.lower.lo_dword.data);
- /* reuse status field from scan list */
- pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
+ pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags |=
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
mb->ol_flags = pkt_flags;
+ mb->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);
if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
@@ -1198,7 +1271,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint32_t pkt_info;
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1316,14 +1389,17 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;
- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
@@ -1382,7 +1458,7 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma; /* Physical address of mbuf data buffer */
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint16_t pkt_info;
uint16_t rx_id;
uint16_t nb_rx;
uint16_t nb_hold;
@@ -1561,13 +1637,15 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* set in the pkt_flags field.
*/
first_seg->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = (pkt_flags |
- rx_desc_status_to_pkt_flags(staterr));
- pkt_flags = (pkt_flags |
- rx_desc_error_to_pkt_flags(staterr));
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
if (likely(pkt_flags & PKT_RX_RSS_HASH))
first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v3 06/16] i40e: support of unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (4 preceding siblings ...)
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 05/16] ixgbe: " Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 07/16] enic: " Helin Zhang
` (12 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++++++++++++++--------------
1 file changed, 512 insertions(+), 274 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index c9f1026..25ee9f8 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -151,272 +151,511 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
return flags;
}
-/* Translate pkt types to pkt flags */
-static inline uint64_t
-i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
+/* The hardware datasheet describes what each ptype value means in more detail */
+static inline uint32_t
+i40e_rxd_pkt_type_mapping(uint8_t ptype)
{
- uint8_t ptype = (uint8_t)((qword & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
- static const uint64_t ip_ptype_map[I40E_MAX_PKT_TYPE] = {
- 0, /* PTYPE 0 */
- 0, /* PTYPE 1 */
- 0, /* PTYPE 2 */
- 0, /* PTYPE 3 */
- 0, /* PTYPE 4 */
- 0, /* PTYPE 5 */
- 0, /* PTYPE 6 */
- 0, /* PTYPE 7 */
- 0, /* PTYPE 8 */
- 0, /* PTYPE 9 */
- 0, /* PTYPE 10 */
- 0, /* PTYPE 11 */
- 0, /* PTYPE 12 */
- 0, /* PTYPE 13 */
- 0, /* PTYPE 14 */
- 0, /* PTYPE 15 */
- 0, /* PTYPE 16 */
- 0, /* PTYPE 17 */
- 0, /* PTYPE 18 */
- 0, /* PTYPE 19 */
- 0, /* PTYPE 20 */
- 0, /* PTYPE 21 */
- PKT_RX_IPV4_HDR, /* PTYPE 22 */
- PKT_RX_IPV4_HDR, /* PTYPE 23 */
- PKT_RX_IPV4_HDR, /* PTYPE 24 */
- 0, /* PTYPE 25 */
- PKT_RX_IPV4_HDR, /* PTYPE 26 */
- PKT_RX_IPV4_HDR, /* PTYPE 27 */
- PKT_RX_IPV4_HDR, /* PTYPE 28 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 29 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 30 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 31 */
- 0, /* PTYPE 32 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 33 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 34 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 35 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 36 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 37 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 38 */
- 0, /* PTYPE 39 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 40 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 41 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 42 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 43 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 44 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 45 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 46 */
- 0, /* PTYPE 47 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 48 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 49 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 50 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 51 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 52 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 53 */
- 0, /* PTYPE 54 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 55 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 56 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 57 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 58 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 59 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 60 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 61 */
- 0, /* PTYPE 62 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 63 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 64 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 65 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 66 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 67 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 68 */
- 0, /* PTYPE 69 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 70 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 71 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 72 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 73 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 74 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 75 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 76 */
- 0, /* PTYPE 77 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 78 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 79 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 80 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 81 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 82 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 83 */
- 0, /* PTYPE 84 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 85 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 86 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 87 */
- PKT_RX_IPV6_HDR, /* PTYPE 88 */
- PKT_RX_IPV6_HDR, /* PTYPE 89 */
- PKT_RX_IPV6_HDR, /* PTYPE 90 */
- 0, /* PTYPE 91 */
- PKT_RX_IPV6_HDR, /* PTYPE 92 */
- PKT_RX_IPV6_HDR, /* PTYPE 93 */
- PKT_RX_IPV6_HDR, /* PTYPE 94 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 95 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 96 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 97 */
- 0, /* PTYPE 98 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 99 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 100 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 101 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 102 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 103 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 104 */
- 0, /* PTYPE 105 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 106 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 107 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 108 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 109 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 110 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 111 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 112 */
- 0, /* PTYPE 113 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 114 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 115 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 116 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 117 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 118 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 119 */
- 0, /* PTYPE 120 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 121 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 122 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 123 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 124 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 125 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 126 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 127 */
- 0, /* PTYPE 128 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 129 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 130 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 131 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 132 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 133 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 134 */
- 0, /* PTYPE 135 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 136 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 137 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 138 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 139 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 140 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 141 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 142 */
- 0, /* PTYPE 143 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 144 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 145 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 146 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 147 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 148 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 149 */
- 0, /* PTYPE 150 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 151 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 152 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 153 */
- 0, /* PTYPE 154 */
- 0, /* PTYPE 155 */
- 0, /* PTYPE 156 */
- 0, /* PTYPE 157 */
- 0, /* PTYPE 158 */
- 0, /* PTYPE 159 */
- 0, /* PTYPE 160 */
- 0, /* PTYPE 161 */
- 0, /* PTYPE 162 */
- 0, /* PTYPE 163 */
- 0, /* PTYPE 164 */
- 0, /* PTYPE 165 */
- 0, /* PTYPE 166 */
- 0, /* PTYPE 167 */
- 0, /* PTYPE 168 */
- 0, /* PTYPE 169 */
- 0, /* PTYPE 170 */
- 0, /* PTYPE 171 */
- 0, /* PTYPE 172 */
- 0, /* PTYPE 173 */
- 0, /* PTYPE 174 */
- 0, /* PTYPE 175 */
- 0, /* PTYPE 176 */
- 0, /* PTYPE 177 */
- 0, /* PTYPE 178 */
- 0, /* PTYPE 179 */
- 0, /* PTYPE 180 */
- 0, /* PTYPE 181 */
- 0, /* PTYPE 182 */
- 0, /* PTYPE 183 */
- 0, /* PTYPE 184 */
- 0, /* PTYPE 185 */
- 0, /* PTYPE 186 */
- 0, /* PTYPE 187 */
- 0, /* PTYPE 188 */
- 0, /* PTYPE 189 */
- 0, /* PTYPE 190 */
- 0, /* PTYPE 191 */
- 0, /* PTYPE 192 */
- 0, /* PTYPE 193 */
- 0, /* PTYPE 194 */
- 0, /* PTYPE 195 */
- 0, /* PTYPE 196 */
- 0, /* PTYPE 197 */
- 0, /* PTYPE 198 */
- 0, /* PTYPE 199 */
- 0, /* PTYPE 200 */
- 0, /* PTYPE 201 */
- 0, /* PTYPE 202 */
- 0, /* PTYPE 203 */
- 0, /* PTYPE 204 */
- 0, /* PTYPE 205 */
- 0, /* PTYPE 206 */
- 0, /* PTYPE 207 */
- 0, /* PTYPE 208 */
- 0, /* PTYPE 209 */
- 0, /* PTYPE 210 */
- 0, /* PTYPE 211 */
- 0, /* PTYPE 212 */
- 0, /* PTYPE 213 */
- 0, /* PTYPE 214 */
- 0, /* PTYPE 215 */
- 0, /* PTYPE 216 */
- 0, /* PTYPE 217 */
- 0, /* PTYPE 218 */
- 0, /* PTYPE 219 */
- 0, /* PTYPE 220 */
- 0, /* PTYPE 221 */
- 0, /* PTYPE 222 */
- 0, /* PTYPE 223 */
- 0, /* PTYPE 224 */
- 0, /* PTYPE 225 */
- 0, /* PTYPE 226 */
- 0, /* PTYPE 227 */
- 0, /* PTYPE 228 */
- 0, /* PTYPE 229 */
- 0, /* PTYPE 230 */
- 0, /* PTYPE 231 */
- 0, /* PTYPE 232 */
- 0, /* PTYPE 233 */
- 0, /* PTYPE 234 */
- 0, /* PTYPE 235 */
- 0, /* PTYPE 236 */
- 0, /* PTYPE 237 */
- 0, /* PTYPE 238 */
- 0, /* PTYPE 239 */
- 0, /* PTYPE 240 */
- 0, /* PTYPE 241 */
- 0, /* PTYPE 242 */
- 0, /* PTYPE 243 */
- 0, /* PTYPE 244 */
- 0, /* PTYPE 245 */
- 0, /* PTYPE 246 */
- 0, /* PTYPE 247 */
- 0, /* PTYPE 248 */
- 0, /* PTYPE 249 */
- 0, /* PTYPE 250 */
- 0, /* PTYPE 251 */
- 0, /* PTYPE 252 */
- 0, /* PTYPE 253 */
- 0, /* PTYPE 254 */
- 0, /* PTYPE 255 */
+ static const uint32_t ptype_table[UINT8_MAX] __rte_cache_aligned = {
+ /* L2 types */
+ /* [0] reserved */
+ [1] = RTE_PTYPE_L2_MAC,
+ [2] = RTE_PTYPE_L2_MAC_TIMESYNC,
+ /* [3] - [5] reserved */
+ [6] = RTE_PTYPE_L2_LLDP,
+ /* [7] - [10] reserved */
+ [11] = RTE_PTYPE_L2_ARP,
+ /* [12] - [21] reserved */
+
+ /* Non tunneled IPv4 */
+ [22] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [24] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [25] reserved */
+ [26] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv4 */
+ [29] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [30] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [31] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [32] reserved */
+ [33] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [34] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [35] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> IPv6 */
+ [36] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [37] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [38] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [39] reserved */
+ [40] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [41] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [42] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN */
+ [43] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+ [44] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [45] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [46] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [47] reserved */
+ [48] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [49] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [50] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+ [51] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [52] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [53] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [54] reserved */
+ [55] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [56] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [57] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+ [58] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [59] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [60] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [61] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [62] reserved */
+ [63] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [64] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [65] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [66] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [67] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [68] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [69] reserved */
+ [70] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [71] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [72] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [73] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [74] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [75] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [76] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [77] reserved */
+ [78] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [79] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [80] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [81] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [82] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [83] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [84] reserved */
+ [85] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [86] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [87] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* Non tunneled IPv6 */
+ [88] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [90] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [91] reserved */
+ [92] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv4 */
+ [95] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [96] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [97] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [98] reserved */
+ [99] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [100] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [101] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> IPv6 */
+ [102] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [103] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [104] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [105] reserved */
+ [106] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [107] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [108] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN */
+ [109] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+ [110] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [111] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [112] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [113] reserved */
+ [114] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [115] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [116] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+ [117] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [118] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [119] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [120] reserved */
+ [121] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [122] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [123] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+ [124] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [125] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [126] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [127] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [128] reserved */
+ [129] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [130] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [131] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [132] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [133] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [134] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [135] reserved */
+ [136] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [137] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [138] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [139] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [140] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [141] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [142] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [143] reserved */
+ [144] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [145] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [146] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [147] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [148] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [149] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [150] reserved */
+ [151] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [152] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [153] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* All others reserved */
};
- return ip_ptype_map[ptype];
+ return ptype_table[ptype];
}
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
@@ -702,11 +941,11 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- mb->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ mb->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -945,9 +1184,9 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ rxm->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1104,10 +1343,9 @@ i40e_recv_scattered_pkts(void *rx_queue,
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- first_seg->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ first_seg->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
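The ptype mapping table added above composes a single 32-bit packet_type out of outer L2/L3, tunnel and inner L2/L3/L4 values. A minimal sketch of how an application might split such a composed value back into its layers is shown below; it assumes per-field mask macros (RTE_PTYPE_L3_MASK, RTE_PTYPE_TUNNEL_MASK, RTE_PTYPE_INNER_L4_MASK) are provided alongside the type values, which is an assumption and not something quoted from this patch.

#include <stdio.h>
#include <stdint.h>
#include <rte_mbuf.h>

/* Sketch only: split a composed packet_type back into layer fields.
 * The RTE_PTYPE_*_MASK macros are assumed to exist next to the
 * RTE_PTYPE_* values defined by this series. */
static inline void
show_ptype_layers(uint32_t ptype)
{
        if ((ptype & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4_EXT_UNKNOWN)
                printf("outer L3: IPv4, options unknown\n");
        if ((ptype & RTE_PTYPE_TUNNEL_MASK) == RTE_PTYPE_TUNNEL_GRENAT)
                printf("tunnel: GRE/Teredo/VXLAN\n");
        if ((ptype & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_UDP)
                printf("inner L4: UDP\n");
}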
* [dpdk-dev] [PATCH v3 07/16] enic: support of unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (5 preceding siblings ...)
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 06/16] i40e: " Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 08/16] vmxnet3: " Helin Zhang
` (11 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_enic/enic_main.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_enic/enic_main.c b/lib/librte_pmd_enic/enic_main.c
index 48fdca2..9acba9a 100644
--- a/lib/librte_pmd_enic/enic_main.c
+++ b/lib/librte_pmd_enic/enic_main.c
@@ -423,7 +423,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -432,7 +432,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Header split */
if (sop && !eop) {
@@ -445,7 +445,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -457,13 +457,14 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -475,7 +476,8 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV6;
}
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
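Here the enic PMD reports only the outer L3 type, so packet_type is assigned directly. A PMD that classifies more layers would OR the per-layer values together; a small hedged sketch follows (not enic code, the classification inputs are hypothetical).

#include <stdint.h>
#include <rte_mbuf.h>

/* Sketch: compose a fuller packet_type from per-layer decisions.
 * is_ipv4/is_tcp stand in for whatever the hardware reports. */
static inline uint32_t
compose_ptype(int is_ipv4, int is_tcp)
{
        uint32_t ptype = RTE_PTYPE_L2_MAC;      /* outer Ethernet */

        ptype |= is_ipv4 ? RTE_PTYPE_L3_IPV4 : RTE_PTYPE_L3_IPV6;
        ptype |= is_tcp ? RTE_PTYPE_L4_TCP : RTE_PTYPE_L4_UDP;

        return ptype;
}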
* [dpdk-dev] [PATCH v3 08/16] vmxnet3: support of unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (6 preceding siblings ...)
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 07/16] enic: " Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-27 11:25 ` Thomas Monjalon
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 09/16] app/test-pipeline: " Helin Zhang
` (10 subsequent siblings)
18 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 8425f32..c85ebd8 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -650,9 +650,9 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
- rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
else
- rxm->ol_flags |= PKT_RX_IPV4_HDR;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4;
if (!rcd->cnc) {
if (!rcd->ipc)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
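The hunk above derives the L3 type from the IPv4 header length: anything longer than the 20-byte base header is reported as RTE_PTYPE_L3_IPV4_EXT. A stand-alone sketch of that test, using only the standard IPv4 header definition:

#include <stdint.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

/* Sketch: report RTE_PTYPE_L3_IPV4_EXT when the IPv4 header carries
 * options, i.e. its length exceeds the 20-byte base header. */
static inline uint32_t
ipv4_l3_ptype(const struct ipv4_hdr *ip)
{
        uint8_t hdr_len = (ip->version_ihl & 0x0f) << 2;        /* bytes */

        return (hdr_len > sizeof(struct ipv4_hdr)) ?
                RTE_PTYPE_L3_IPV4_EXT : RTE_PTYPE_L3_IPV4;
}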
* [dpdk-dev] [PATCH v3 09/16] app/test-pipeline: support of unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (7 preceding siblings ...)
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 08/16] vmxnet3: " Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 10/16] app/testpmd: " Helin Zhang
` (9 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pipeline/pipeline_hash.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 4598ad4..548615f 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,20 +459,21 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;
k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;
memcpy(key, ipv6_dst, 16);
- }
+ } else
+ continue;
*signature = test_hash(key, 0, 0);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
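RTE_ETH_IS_IPV4_HDR() and RTE_ETH_IS_IPV6_HDR() used above come from the header changes earlier in this series; their exact definitions are not repeated here. The intent they capture can be sketched as follows, assuming an RTE_PTYPE_L3_MASK macro covering the outer L3 sub-field (the mask name is an assumption):

#include <stdint.h>
#include <rte_mbuf.h>

/* Sketch of the intended semantics: "is IPv4" matches plain IPv4 as
 * well as the EXT/EXT_UNKNOWN variants of the outer L3 field. */
static inline int
ptype_is_ipv4(uint32_t ptype)
{
        uint32_t l3 = ptype & RTE_PTYPE_L3_MASK;        /* assumed macro */

        return l3 == RTE_PTYPE_L3_IPV4 ||
               l3 == RTE_PTYPE_L3_IPV4_EXT ||
               l3 == RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
}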
* [dpdk-dev] [PATCH v3 10/16] app/testpmd: support of unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (8 preceding siblings ...)
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 09/16] app/test-pipeline: " Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 11/16] examples/ip_fragmentation: " Helin Zhang
` (8 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
app/test-pmd/csumonly.c | 10 +++++-----
app/test-pmd/rxonly.c | 9 +++------
2 files changed, 8 insertions(+), 11 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 0a7af79..ad877c2 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -199,8 +199,9 @@ parse_ethernet(struct ether_hdr *eth_hdr, struct testpmd_offload_info *info)
/* Parse a vxlan header */
static void
-parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
- uint64_t mbuf_olflags)
+parse_vxlan(struct udp_hdr *udp_hdr,
+ struct testpmd_offload_info *info,
+ uint32_t pkt_type)
{
struct ether_hdr *eth_hdr;
@@ -208,8 +209,7 @@ parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
* (rfc7348) or that the rx offload flag is set (i40e only
* currently) */
if (udp_hdr->dst_port != _htons(4789) &&
- (mbuf_olflags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR)) == 0)
+ RTE_ETH_IS_TUNNEL_PKT(pkt_type) == 0)
return;
info->is_tunnel = 1;
@@ -543,7 +543,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
struct udp_hdr *udp_hdr;
udp_hdr = (struct udp_hdr *)((char *)l3_hdr +
info.l3_len);
- parse_vxlan(udp_hdr, &info, m->ol_flags);
+ parse_vxlan(udp_hdr, &info, m->packet_type);
} else if (info.l4_proto == IPPROTO_GRE) {
struct simple_gre_hdr *gre_hdr;
gre_hdr = (struct simple_gre_hdr *)
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index fdfe990..8eb68c4 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -92,7 +92,7 @@ pkt_burst_receive(struct fwd_stream *fs)
uint64_t ol_flags;
uint16_t nb_rx;
uint16_t i, packet_type;
- uint64_t is_encapsulation;
+ uint16_t is_encapsulation;
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
@@ -135,10 +135,7 @@ pkt_burst_receive(struct fwd_stream *fs)
eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
-
- is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR);
-
+ is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
print_ether_addr(" src=", ð_hdr->s_addr);
print_ether_addr(" - dst=", ð_hdr->d_addr);
printf(" - type=0x%04x - length=%u - nb_segs=%d",
@@ -174,7 +171,7 @@ pkt_burst_receive(struct fwd_stream *fs)
l2_len = sizeof(struct ether_hdr);
/* Do not support ipv4 option field */
- if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
l3_len = sizeof(struct ipv4_hdr);
ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v3 11/16] examples/ip_fragmentation: support of unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (9 preceding siblings ...)
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 10/16] app/testpmd: " Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 12/16] examples/ip_reassembly: " Helin Zhang
` (7 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_fragmentation/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index eac5427..152844e 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -286,7 +286,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
len = qconf->tx_mbufs[port_out].len;
/* if this is an IPv4 packet */
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
/* Read the lookup key (i.e. ip_dst) from the input packet */
@@ -320,9 +320,8 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
if (unlikely (len2 < 0))
return;
}
- }
- /* if this is an IPv6 packet */
- else if (m->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if this is an IPv6 packet */
struct ipv6_hdr *ip_hdr;
ipv6 = 1;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v3 12/16] examples/ip_reassembly: support of unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (10 preceding siblings ...)
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 11/16] examples/ip_fragmentation: " Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 13/16] examples/l3fwd-acl: " Helin Zhang
` (6 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_reassembly/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8492153..5ef2135 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -357,7 +357,7 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dst_port = portid;
/* if packet is IPv4 */
- if (m->ol_flags & (PKT_RX_IPV4_HDR)) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
@@ -397,9 +397,8 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
}
eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
- }
- /* if packet is IPv6 */
- else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if packet is IPv6 */
struct ipv6_extension_fragment *frag_hdr;
struct ipv6_hdr *ip_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v3 13/16] examples/l3fwd-acl: support of unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (11 preceding siblings ...)
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 12/16] examples/ip_reassembly: " Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 14/16] examples/l3fwd-power: " Helin Zhang
` (5 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-acl/main.c | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index f1f7601..af70ccd 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -651,9 +651,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];
- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt,
unsigned char *) + sizeof(struct ether_hdr));
@@ -674,8 +672,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
rte_pktmbuf_free(pkt);
}
- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -693,17 +690,13 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];
- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
-
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;
- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -751,9 +744,9 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
- if (m->ol_flags & PKT_RX_IPV4_HDR)
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
dump_acl4_rule(m, res);
- else
+ else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
dump_acl6_rule(m, res);
}
#endif
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v3 14/16] examples/l3fwd-power: support of unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (12 preceding siblings ...)
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 13/16] examples/l3fwd-acl: " Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 15/16] examples/l3fwd: " Helin Zhang
` (4 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-power/main.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index f6b55b9..964e5b9 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -638,7 +638,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr =
(struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char*)
@@ -673,8 +673,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
- }
- else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
struct ipv6_hdr *ipv6_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v3 15/16] examples/l3fwd: support of unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (13 preceding siblings ...)
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 14/16] examples/l3fwd-power: " Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 16/16] mbuf: remove old packet type bit masks Helin Zhang
` (3 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd/main.c | 71 +++++++++++++++++++++++++++++----------------------
1 file changed, 40 insertions(+), 31 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v3 changes:
* Minor bug fixes and enhancements.
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 6f7d7d4..49000f3 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -958,7 +958,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char *) +
sizeof(struct ether_hdr));
@@ -993,7 +993,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
send_single_packet(m, dst_port);
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;
@@ -1014,8 +1014,9 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
- }
-
+ } else
+ /* Free the mbuf that contains non-IPV4/IPV6 packet */
+ rte_pktmbuf_free(m);
}
#ifdef DO_RFC_1812_CHECKS
@@ -1039,11 +1040,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
-rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
{
uint8_t ihl;
- if ((flags & PKT_RX_IPV4_HDR) != 0) {
+ if (RTE_ETH_IS_IPV4_HDR(ptype)) {
ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
@@ -1074,11 +1075,11 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;
- if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
- } else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1112,17 +1113,19 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];
dst_port[0] = dp;
- rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
+ rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
}
/*
- * Read ol_flags and destination IPV4 addresses from 4 mbufs.
+ * Read packet_type and destination IPV4 addresses from 4 mbufs.
*/
static inline void
-processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
+processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
+ __m128i *dip,
+ uint32_t *ipv4_flag)
{
struct ipv4_hdr *ipv4_hdr;
struct ether_hdr *eth_hdr;
@@ -1131,22 +1134,22 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x0 = ipv4_hdr->dst_addr;
- flag[0] = pkt[0]->ol_flags & PKT_RX_IPV4_HDR;
+ ipv4_flag[0] = pkt[0]->packet_type & RTE_PTYPE_L3_IPV4;
eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x1 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[1]->ol_flags;
+ ipv4_flag[0] &= pkt[1]->packet_type;
eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x2 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[2]->ol_flags;
+ ipv4_flag[0] &= pkt[2]->packet_type;
eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x3 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[3]->ol_flags;
+ ipv4_flag[0] &= pkt[3]->packet_type;
dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
@@ -1156,8 +1159,12 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
-processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
- uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
+processx4_step2(const struct lcore_conf *qconf,
+ __m128i dip,
+ uint32_t ipv4_flag,
+ uint8_t portid,
+ struct rte_mbuf *pkt[FWDSTEP],
+ uint16_t dprt[FWDSTEP])
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1167,7 +1174,7 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);
/* if all 4 packets are IPV4. */
- if (likely(flag != 0)) {
+ if (likely(ipv4_flag)) {
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1218,13 +1225,13 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[3], te[3]);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
- &dst_port[0], pkt[0]->ol_flags);
+ &dst_port[0], pkt[0]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
- &dst_port[1], pkt[1]->ol_flags);
+ &dst_port[1], pkt[1]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
- &dst_port[2], pkt[2]->ol_flags);
+ &dst_port[2], pkt[2]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
- &dst_port[3], pkt[3]->ol_flags);
+ &dst_port[3], pkt[3]->packet_type);
}
/*
@@ -1411,7 +1418,7 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
- uint32_t flag[MAX_PKT_BURST / FWDSTEP];
+ uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
uint16_t pnum[MAX_PKT_BURST + 1];
#endif
@@ -1481,14 +1488,16 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
for (j = 0; j < n ; j+=4) {
- uint32_t ol_flag = pkts_burst[j]->ol_flags
- & pkts_burst[j+1]->ol_flags
- & pkts_burst[j+2]->ol_flags
- & pkts_burst[j+3]->ol_flags;
- if (ol_flag & PKT_RX_IPV4_HDR ) {
+ uint32_t pkt_type =
+ pkts_burst[j]->packet_type &
+ pkts_burst[j+1]->packet_type &
+ pkts_burst[j+2]->packet_type &
+ pkts_burst[j+3]->packet_type;
+ if (pkt_type & RTE_PTYPE_L3_IPV4) {
simple_ipv4_fwd_4pkts(&pkts_burst[j],
portid, qconf);
- } else if (ol_flag & PKT_RX_IPV6_HDR) {
+ } else if (pkt_type &
+ RTE_PTYPE_L3_IPV6) {
simple_ipv6_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1513,13 +1522,13 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
- &flag[j / FWDSTEP]);
+ &ipv4_flag[j / FWDSTEP]);
}
k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
- flag[j / FWDSTEP], portid,
+ ipv4_flag[j / FWDSTEP], portid,
&pkts_burst[j], &dst_port[j]);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
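In the hunks above, processx4_step1() ANDs packet_type across four mbufs and the main loop then tests RTE_PTYPE_L3_IPV4: the bit survives the AND only if every packet in the group carries it, which is exactly the condition for taking the x4 LPM fast path. The check in isolation:

#include <stdint.h>
#include <rte_mbuf.h>

/* Sketch: the x4 fast path is valid only when all four packets have
 * the IPv4 bit set in their packet_type. */
static inline int
all_ipv4_x4(struct rte_mbuf *pkt[4])
{
        uint32_t ptype_and = pkt[0]->packet_type &
                             pkt[1]->packet_type &
                             pkt[2]->packet_type &
                             pkt[3]->packet_type;

        return (ptype_and & RTE_PTYPE_L3_IPV4) != 0;
}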
* [dpdk-dev] [PATCH v3 16/16] mbuf: remove old packet type bit masks
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (14 preceding siblings ...)
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 15/16] examples/l3fwd: " Helin Zhang
@ 2015-02-17 6:59 ` Helin Zhang
2015-02-17 7:03 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Liang, Cunming
` (2 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-17 6:59 UTC (permalink / raw)
To: dev
As unified packet types are used instead, those old bit masks and
the relevant macros for packet type indication need to be removed.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 6 ------
lib/librte_mbuf/rte_mbuf.h | 14 ++++----------
2 files changed, 4 insertions(+), 16 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
* Redefined the bit masks for packet RX offload flags.
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 2a4bc8c..6e018c4 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -215,14 +215,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
- case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
- case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
- case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
- case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
- case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
- case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 2cdf8a0..069a8f7 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -90,16 +90,10 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
-#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
-#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
-#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
-#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
-#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
-#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
-#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
-#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
-#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
-#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
+#define PKT_RX_IEEE1588_PTP (1ULL << 5) /**< RX IEEE1588 L2 Ethernet PT Packet. */
+#define PKT_RX_IEEE1588_TMST (1ULL << 6) /**< RX IEEE1588 L2/L4 timestamped packet.*/
+#define PKT_RX_FDIR_ID (1ULL << 7) /**< FD id reported if FDIR match. */
+#define PKT_RX_FDIR_FLX (1ULL << 8) /**< Flexible bytes reported if FDIR match. */
/* add new RX flags here */
/* add new TX flags here */
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
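With PKT_RX_IPV4_HDR and friends gone, applications that still test them have a mechanical migration path; a hedged sketch of the replacement (the headers providing the RTE_ETH_IS_* helpers are assumed):

#include <rte_mbuf.h>
#include <rte_ethdev.h>

/* Sketch: the old test "m->ol_flags & PKT_RX_IPV4_HDR" becomes a
 * packet_type test once the flags are removed. */
static void
dispatch_pkt(struct rte_mbuf *m)
{
        if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
                /* IPv4 path */
        } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
                /* IPv6 path */
        } else {
                /* neither IPv4 nor IPv6 */
                rte_pktmbuf_free(m);
        }
}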
* Re: [dpdk-dev] [PATCH v3 00/16] unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (15 preceding siblings ...)
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 16/16] mbuf: remove old packet type bit masks Helin Zhang
@ 2015-02-17 7:03 ` Liang, Cunming
2015-02-17 9:46 ` Ananyev, Konstantin
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
18 siblings, 0 replies; 257+ messages in thread
From: Liang, Cunming @ 2015-02-17 7:03 UTC (permalink / raw)
To: Zhang, Helin, dev
> -----Original Message-----
> From: Zhang, Helin
> Sent: Tuesday, February 17, 2015 2:59 PM
> To: dev@dpdk.org
> Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev, Konstantin; Richardson,
> Bruce; Zhang, Helin
> Subject: [PATCH v3 00/16] unified packet type
>
> Currently only 6 bits which are stored in ol_flags are used to indicate the
> packet types. This is not enough, as some NIC hardware can recognize quite
> a lot of packet types, e.g i40e hardware can recognize more than 150 packet
> types. Hiding those packet types hides hardware offload capabilities which
> could be quite useful for improving performance and for end users. So an
> unified packet types are needed to support all possible PMDs. A 16 bits
> packet_type in mbuf structure can be changed to 32 bits and used for this
> purpose. In addition, all packet types stored in ol_flag field should be
> deleted at all, and 6 bits of ol_flags can be save as the benifit.
>
> Initially, 32 bits of packet_type can be divided into several sub fields to
> indicate different packet type information of a packet. The initial design
> is to divide those bits into fields for L2 types, L3 types, L4 types, tunnel
> types, inner L2 types, inner L3 types and inner L4 types. All PMDs should
> translate the offloaded packet types into these 7 fields of information, for
> user applications.
>
> v2 changes:
> * Enlarged the packet_type field from 16 bits to 32 bits.
> * Redefined the packet type sub-fields.
> * Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
> * Used redefined packet types and enlarged packet_type field for all PMDs
> and corresponding applications.
> * Removed changes in bond and its relevant application, as there is no need
> at all according to the recent bond changes.
>
> v3 changes:
> * Put the mbuf layout changes into a single patch.
> * Put vector ixgbe changes right after mbuf changes.
> * Disabled vector ixgbe PMD by default, as mbuf layout changed, and then
> re-enabled it after vector ixgbe PMD updated.
> * Put the definitions of unified packet type into a single patch.
> * Minor bug fixes and enhancements in l3fwd example.
>
> Helin Zhang (16):
> mbuf: redefinition of packet_type in rte_mbuf
> ixgbe: support of unified packet type for vector
> mbuf: add definitions of unified packet types
> e1000: support of unified packet type
> ixgbe: support of unified packet type
> i40e: support of unified packet type
> enic: support of unified packet type
> vmxnet3: support of unified packet type
> app/test-pipeline: support of unified packet type
> app/testpmd: support of unified packet type
> examples/ip_fragmentation: support of unified packet type
> examples/ip_reassembly: support of unified packet type
> examples/l3fwd-acl: support of unified packet type
> examples/l3fwd-power: support of unified packet type
> examples/l3fwd: support of unified packet type
> mbuf: remove old packet type bit masks
>
> app/test-pipeline/pipeline_hash.c | 7 +-
> app/test-pmd/csumonly.c | 10 +-
> app/test-pmd/rxonly.c | 9 +-
> examples/ip_fragmentation/main.c | 7 +-
> examples/ip_reassembly/main.c | 7 +-
> examples/l3fwd-acl/main.c | 19 +-
> examples/l3fwd-power/main.c | 5 +-
> examples/l3fwd/main.c | 71 +-
> .../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
> lib/librte_mbuf/rte_mbuf.c | 6 -
> lib/librte_mbuf/rte_mbuf.h | 127 +++-
> lib/librte_pmd_e1000/igb_rxtx.c | 98 ++-
> lib/librte_pmd_enic/enic_main.c | 14 +-
> lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++-------
> lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++-
> lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +-
> lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 +-
> 17 files changed, 921 insertions(+), 448 deletions(-)
>
> --
> 1.9.3
Acked-by: Cunming Liang <cunming.liang@intel.com>
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v3 03/16] mbuf: add definitions of unified packet types
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 03/16] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-02-17 9:01 ` Olivier MATZ
2015-02-20 14:26 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Olivier MATZ @ 2015-02-17 9:01 UTC (permalink / raw)
To: Helin Zhang, dev
Hi Helin,
On 02/17/2015 07:59 AM, Helin Zhang wrote:
> As there are only 6 bit flags in ol_flags for indicating packet
> types, which is not enough to describe all the possible packet
> types hardware can recognize. For example, i40e hardware can
> recognize more than 150 packet types. Unified packet type is
> composed of L2 type, L3 type, L4 type, tunnel type, inner L2 type,
> inner L3 type and inner L4 type fields, and can be stored in
> 'struct rte_mbuf' of 32 bits field 'packet_type'.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
A formal definition of each flag is still missing. I explained
several times why it's needed. We must be able to answer these
questions:
- If I'm developing a PMD, what fields should I check in the packet
to set a specific flag?
- If I'm developing an application, if a specific flag is set, what
checks can I skip?
Example with RTE_PTYPE_L3_IPV4:
- IP version field is 4
- no IP options (header size is 20)
- layer 2 identified the packet as IP (ex: ethertype=0x800)
I think we need such a definition for all packet types.
Regards,
Olivier
^ permalink raw reply [flat|nested] 257+ messages in thread
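The kind of per-type definition being asked for here could take the shape of a doxygen comment on each RTE_PTYPE_* value, built from the three criteria listed above; a sketch of the requested wording (not text from the series):

/**
 * RTE_PTYPE_L3_IPV4 (sketch of a possible formal definition).
 * A PMD sets this value only when:
 *  - layer 2 identified the packet as IP (e.g. ethertype 0x0800),
 *  - the IP version field is 4,
 *  - the header carries no options (header length is 20 bytes).
 * An application seeing this value may skip those three checks.
 */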
* Re: [dpdk-dev] [PATCH v3 00/16] unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (16 preceding siblings ...)
2015-02-17 7:03 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Liang, Cunming
@ 2015-02-17 9:46 ` Ananyev, Konstantin
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
18 siblings, 0 replies; 257+ messages in thread
From: Ananyev, Konstantin @ 2015-02-17 9:46 UTC (permalink / raw)
To: Zhang, Helin, dev
> -----Original Message-----
> From: Zhang, Helin
> Sent: Tuesday, February 17, 2015 6:59 AM
> To: dev@dpdk.org
> Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev, Konstantin; Richardson, Bruce; Zhang, Helin
> Subject: [PATCH v3 00/16] unified packet type
>
> Currently only 6 bits which are stored in ol_flags are used to indicate the
> packet types. This is not enough, as some NIC hardware can recognize quite
> a lot of packet types, e.g i40e hardware can recognize more than 150 packet
> types. Hiding those packet types hides hardware offload capabilities which
> could be quite useful for improving performance and for end users. So an
> unified packet types are needed to support all possible PMDs. A 16 bits
> packet_type in mbuf structure can be changed to 32 bits and used for this
> purpose. In addition, all packet types stored in ol_flag field should be
> deleted at all, and 6 bits of ol_flags can be save as the benifit.
>
> Initially, 32 bits of packet_type can be divided into several sub fields to
> indicate different packet type information of a packet. The initial design
> is to divide those bits into fields for L2 types, L3 types, L4 types, tunnel
> types, inner L2 types, inner L3 types and inner L4 types. All PMDs should
> translate the offloaded packet types into these 7 fields of information, for
> user applications.
>
> v2 changes:
> * Enlarged the packet_type field from 16 bits to 32 bits.
> * Redefined the packet type sub-fields.
> * Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
> * Used redefined packet types and enlarged packet_type field for all PMDs
> and corresponding applications.
> * Removed changes in bond and its relevant application, as there is no need
> at all according to the recent bond changes.
>
> v3 changes:
> * Put the mbuf layout changes into a single patch.
> * Put vector ixgbe changes right after mbuf changes.
> * Disabled vector ixgbe PMD by default, as mbuf layout changed, and then
> re-enabled it after vector ixgbe PMD updated.
> * Put the definitions of unified packet type into a single patch.
> * Minor bug fixes and enhancements in l3fwd example.
>
> Helin Zhang (16):
> mbuf: redefinition of packet_type in rte_mbuf
> ixgbe: support of unified packet type for vector
> mbuf: add definitions of unified packet types
> e1000: support of unified packet type
> ixgbe: support of unified packet type
> i40e: support of unified packet type
> enic: support of unified packet type
> vmxnet3: support of unified packet type
> app/test-pipeline: support of unified packet type
> app/testpmd: support of unified packet type
> examples/ip_fragmentation: support of unified packet type
> examples/ip_reassembly: support of unified packet type
> examples/l3fwd-acl: support of unified packet type
> examples/l3fwd-power: support of unified packet type
> examples/l3fwd: support of unified packet type
> mbuf: remove old packet type bit masks
>
> app/test-pipeline/pipeline_hash.c | 7 +-
> app/test-pmd/csumonly.c | 10 +-
> app/test-pmd/rxonly.c | 9 +-
> examples/ip_fragmentation/main.c | 7 +-
> examples/ip_reassembly/main.c | 7 +-
> examples/l3fwd-acl/main.c | 19 +-
> examples/l3fwd-power/main.c | 5 +-
> examples/l3fwd/main.c | 71 +-
> .../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
> lib/librte_mbuf/rte_mbuf.c | 6 -
> lib/librte_mbuf/rte_mbuf.h | 127 +++-
> lib/librte_pmd_e1000/igb_rxtx.c | 98 ++-
> lib/librte_pmd_enic/enic_main.c | 14 +-
> lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++-------
> lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++-
> lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +-
> lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 +-
> 17 files changed, 921 insertions(+), 448 deletions(-)
>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> --
> 1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v3 03/16] mbuf: add definitions of unified packet types
2015-02-17 9:01 ` Olivier MATZ
@ 2015-02-20 14:26 ` Zhang, Helin
2015-02-24 9:09 ` Olivier MATZ
0 siblings, 1 reply; 257+ messages in thread
From: Zhang, Helin @ 2015-02-20 14:26 UTC (permalink / raw)
To: Olivier MATZ, dev
> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Tuesday, February 17, 2015 5:02 PM
> To: Zhang, Helin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3 03/16] mbuf: add definitions of unified packet
> types
>
> Hi Helin,
>
> On 02/17/2015 07:59 AM, Helin Zhang wrote:
> > As there are only 6 bit flags in ol_flags for indicating packet types,
> > which is not enough to describe all the possible packet types hardware
> > can recognize. For example, i40e hardware can recognize more than 150
> > packet types. Unified packet type is composed of L2 type, L3 type, L4
> > type, tunnel type, inner L2 type, inner L3 type and inner L4 type
> > fields, and can be stored in 'struct rte_mbuf' of 32 bits field
> > 'packet_type'.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
>
> A formal definition of each flag is still missing. I explained several times why it's
> needed. We must be able to answer to these
> questions:
>
> - If I'm developing a PMD, what fields should I check in the packet
> to set a specific flag?
> - If I'm developing an application, if a specific flag is set, what
> checks can I skip?
>
> Example with RTE_PTYPE_L3_IPV4:
>
> - IP version field is 4
> - no IP options (header size is 20)
> - layer 2 identified the packet as IP (ex: ethertype=0x800)
>
> I think we need such a definition for all packet types.
You meant we need a detailed description of each packet type, right?
If yes, I can add that information soon. Thanks for the help!
Regards,
Helin
>
> Regards,
> Olivier
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v3 03/16] mbuf: add definitions of unified packet types
2015-02-20 14:26 ` Zhang, Helin
@ 2015-02-24 9:09 ` Olivier MATZ
2015-02-24 13:38 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Olivier MATZ @ 2015-02-24 9:09 UTC (permalink / raw)
To: Zhang, Helin, dev
Hi Helin,
On 02/20/2015 03:26 PM, Zhang, Helin wrote:
>> On 02/17/2015 07:59 AM, Helin Zhang wrote:
>>> As there are only 6 bit flags in ol_flags for indicating packet types,
>>> which is not enough to describe all the possible packet types hardware
>>> can recognize. For example, i40e hardware can recognize more than 150
>>> packet types. Unified packet type is composed of L2 type, L3 type, L4
>>> type, tunnel type, inner L2 type, inner L3 type and inner L4 type
>>> fields, and can be stored in 'struct rte_mbuf' of 32 bits field
>>> 'packet_type'.
>>>
>>> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
>>
>> A formal definition of each flag is still missing. I explained several times why it's
>> needed. We must be able to answer to these
>> questions:
>>
>> - If I'm developing a PMD, what fields should I check in the packet
>> to set a specific flag?
>> - If I'm developing an application, if a specific flag is set, what
>> checks can I skip?
>>
>> Example with RTE_PTYPE_L3_IPV4:
>>
>> - IP version field is 4
>> - no IP options (header size is 20)
>> - layer 2 identified the packet as IP (ex: ethertype=0x800)
>>
>> I think we need such a definition for all packet types.
> You meant we need a detailed description of each packet type, right?
> If yes, I can add those information soon. Thanks for the helps!
Yes, I think this would be really helpful.
Thank you!
Olivier
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v3 03/16] mbuf: add definitions of unified packet types
2015-02-24 9:09 ` Olivier MATZ
@ 2015-02-24 13:38 ` Zhang, Helin
0 siblings, 0 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-02-24 13:38 UTC (permalink / raw)
To: Olivier MATZ, dev
> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Tuesday, February 24, 2015 5:09 PM
> To: Zhang, Helin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3 03/16] mbuf: add definitions of unified packet
> types
>
> Hi Helin,
>
> On 02/20/2015 03:26 PM, Zhang, Helin wrote:
> >> On 02/17/2015 07:59 AM, Helin Zhang wrote:
> >>> As there are only 6 bit flags in ol_flags for indicating packet
> >>> types, which is not enough to describe all the possible packet types
> >>> hardware can recognize. For example, i40e hardware can recognize
> >>> more than 150 packet types. Unified packet type is composed of L2
> >>> type, L3 type, L4 type, tunnel type, inner L2 type, inner L3 type
> >>> and inner L4 type fields, and can be stored in 'struct rte_mbuf' of
> >>> 32 bits field 'packet_type'.
> >>>
> >>> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> >>
> >> A formal definition of each flag is still missing. I explained
> >> several times why it's needed. We must be able to answer to these
> >> questions:
> >>
> >> - If I'm developing a PMD, what fields should I check in the packet
> >> to set a specific flag?
> >> - If I'm developing an application, if a specific flag is set, what
> >> checks can I skip?
> >>
> >> Example with RTE_PTYPE_L3_IPV4:
> >>
> >> - IP version field is 4
> >> - no IP options (header size is 20)
> >> - layer 2 identified the packet as IP (ex: ethertype=0x800)
> >>
> >> I think we need such a definition for all packet types.
> > You meant we need a detailed description of each packet type, right?
> > If yes, I can add those information soon. Thanks for the helps!
>
> Yes, I think this would be really helpful.
OK. Got it. I will add them and send out the v4 version. Thanks for your good suggestions!
Regards,
Helin
>
> Thank you!
> Olivier
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v3 08/16] vmxnet3: support of unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 08/16] vmxnet3: " Helin Zhang
@ 2015-02-27 11:25 ` Thomas Monjalon
2015-02-27 12:26 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Thomas Monjalon @ 2015-02-27 11:25 UTC (permalink / raw)
To: dev, Helin Zhang
2015-02-17 14:59, Helin Zhang:
> To unify packet types among all PMDs, bit masks of packet type for
> 'ol_flags' are replaced by unified packet type.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Helin, this patch was already acked in v2 and you didn't change it.
Please keep the Acked-by line in such a case.
Note that Acked-by is still valid after minor changes like typos.
I'd like every developer to adopt this rule. Please spread the word.
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v3 08/16] vmxnet3: support of unified packet type
2015-02-27 11:25 ` Thomas Monjalon
@ 2015-02-27 12:26 ` Zhang, Helin
0 siblings, 0 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-02-27 12:26 UTC (permalink / raw)
To: Thomas Monjalon, dev
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Friday, February 27, 2015 7:26 PM
> To: dev@dpdk.org; Zhang, Helin
> Subject: Re: [dpdk-dev] [PATCH v3 08/16] vmxnet3: support of unified packet
> type
>
> 2015-02-17 14:59, Helin Zhang:
> > To unify packet types among all PMDs, bit masks of packet type for
> > 'ol_flags' are replaced by unified packet type.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
>
> Helin, this patch was already acked in v2 and you didn't change it.
> Please keep the Acked-by line in such cases.
> Note that Acked-by is still valid after minor changes like typos.
OK. Good to learn that! Thank you very much!
>
> I'd like every developer to adopt this rule. Please spread the word.
Yes, I will forward this rule to all the team here. Hopefully it will be helpful for all of us!
Regards,
Helin
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 00/18] unified packet type
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
` (17 preceding siblings ...)
2015-02-17 9:46 ` Ananyev, Konstantin
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of packet_type in rte_mbuf Helin Zhang
` (18 more replies)
18 siblings, 19 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
Currently only 6 bits which are stored in ol_flags are used to indicate the
packet types. This is not enough, as some NIC hardware can recognize quite
a lot of packet types, e.g. i40e hardware can recognize more than 150 packet
types. Hiding those packet types hides hardware offload capabilities which
could be quite useful for improving performance and for end users. So a
unified packet type is needed to support all possible PMDs. The 16-bit
packet_type field in the mbuf structure can be enlarged to 32 bits and used
for this purpose. In addition, all packet type bits stored in the ol_flags
field should be removed entirely, saving 6 bits of ol_flags as a benefit.
Initially, the 32 bits of packet_type can be divided into several sub-fields
to indicate different packet type information of a packet. The initial design
is to divide those bits into fields for L2 types, L3 types, L4 types, tunnel
types, inner L2 types, inner L3 types and inner L4 types. All PMDs should
translate the offloaded packet types into these 7 fields of information for
user applications.
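As a minimal sketch of what this buys an application (assuming the RTE_PTYPE_* masks and values added later in this series), branching on the sub-fields looks like this:
#include <stdint.h>
#include <rte_mbuf.h>
/* Sketch only: classify a received mbuf by its unified packet type. */
static void count_ptype(const struct rte_mbuf *m,
			uint64_t *ipv4, uint64_t *udp, uint64_t *tunnel)
{
	uint32_t ptype = m->packet_type;
	if ((ptype & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4)
		(*ipv4)++;	/* outer L3 is IPv4 without options */
	if ((ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)
		(*udp)++;	/* outer L4 is UDP */
	if (ptype & RTE_PTYPE_TUNNEL_MASK)
		(*tunnel)++;	/* tunneled; inner fields describe the payload */
}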
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
* Used redefined packet types and enlarged packet_type field for all PMDs
and corresponding applications.
* Removed changes in bond and its relevant application, as there is no need
at all according to the recent bond changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Put vector ixgbe changes right after mbuf changes.
* Disabled vector ixgbe PMD by default, as mbuf layout changed, and then
re-enabled it after vector ixgbe PMD updated.
* Put the definitions of unified packet type into a single patch.
* Minor bug fixes and enhancements in l3fwd example.
v4 changes:
* Added detailed descriptions of each packet type.
* Supported unified packet type of fm10k.
* Added printing of the packet type of each received packet for rxonly
mode in testpmd.
* Removed several useless code lines which block packet type unification from
app/test/packet_burst_generator.c.
Helin Zhang (18):
mbuf: redefinition of packet_type in rte_mbuf
ixgbe: support of unified packet type for vector
mbuf: add definitions of unified packet types
e1000: support of unified packet type
ixgbe: support of unified packet type
i40e: support of unified packet type
enic: support of unified packet type
vmxnet3: support of unified packet type
fm10k: support of unified packet type
app/test-pipeline: support of unified packet type
app/testpmd: support of unified packet type
app/test: Remove useless code
examples/ip_fragmentation: support of unified packet type
examples/ip_reassembly: support of unified packet type
examples/l3fwd-acl: support of unified packet type
examples/l3fwd-power: support of unified packet type
examples/l3fwd: support of unified packet type
mbuf: remove old packet type bit masks
app/test-pipeline/pipeline_hash.c | 7 +-
app/test-pmd/csumonly.c | 10 +-
app/test-pmd/rxonly.c | 178 ++++-
app/test/packet_burst_generator.c | 10 -
examples/ip_fragmentation/main.c | 7 +-
examples/ip_reassembly/main.c | 7 +-
examples/l3fwd-acl/main.c | 19 +-
examples/l3fwd-power/main.c | 5 +-
examples/l3fwd/main.c | 71 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
lib/librte_mbuf/rte_mbuf.c | 6 -
lib/librte_mbuf/rte_mbuf.h | 290 +++++++-
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++-
lib/librte_pmd_enic/enic_main.c | 14 +-
lib/librte_pmd_fm10k/fm10k_rxtx.c | 30 +-
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++-------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++-
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +-
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 +-
19 files changed, 1274 insertions(+), 467 deletions(-)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of packet_type in rte_mbuf
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-03-02 11:47 ` Chilikin, Andrey
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 02/18] ixgbe: support of unified packet type for vector Helin Zhang
` (17 subsequent siblings)
18 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
In order to unify the packet type, the 'packet_type' field in
'struct rte_mbuf' needs to be extended from 16 to 32 bits.
Accordingly, some fields in 'struct rte_mbuf' are re-organized to
support this change for the vector PMD. As 'struct rte_kni_mbuf'
for KNI must map exactly onto 'struct rte_mbuf', it is modified
accordingly. In addition, the vector PMD of ixgbe is disabled by
default, as the 'struct rte_mbuf' layout has changed.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
---
config/common_linuxapp | 2 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 ++--
lib/librte_mbuf/rte_mbuf.h | 23 +++++++++++++++-------
3 files changed, 19 insertions(+), 10 deletions(-)
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Disabled vector ixgbe PMD by default, as mbuf layout changed.
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 97f1c9e..97d7bae 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -166,7 +166,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=y
+CONFIG_RTE_IXGBE_INC_VECTOR=n
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
#
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index 1e55c2d..bd1cc09 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -117,9 +117,9 @@ struct rte_kni_mbuf {
uint16_t data_off; /**< Start address of data in segment buffer. */
char pad1[4];
uint64_t ol_flags; /**< Offload features. */
- char pad2[2];
- uint16_t data_len; /**< Amount of data in segment buffer. */
+ char pad2[4];
uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
/* fields on second cache line */
char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 17ba791..f5b7a8b 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -258,17 +258,26 @@ struct rte_mbuf {
/* remaining bytes are set on RX when pulling packet from descriptor */
MARKER rx_descriptor_fields1;
- /**
- * The packet type, which is used to indicate ordinary packet and also
- * tunneled packet format, i.e. each number is represented a type of
- * packet.
+ /*
+ * The packet type, which is the combination of outer/inner L2, L3, L4
+ * and tunnel types.
*/
- uint16_t packet_type;
+ union {
+ uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
+ struct {
+ uint32_t l2_type:4; /**< (Outer) L2 type. */
+ uint32_t l3_type:4; /**< (Outer) L3 type. */
+ uint32_t l4_type:4; /**< (Outer) L4 type. */
+ uint32_t tun_type:4; /**< Tunnel type. */
+ uint32_t inner_l2_type:4; /**< Inner L2 type. */
+ uint32_t inner_l3_type:4; /**< Inner L3 type. */
+ uint32_t inner_l4_type:4; /**< Inner L4 type. */
+ };
+ };
- uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
- uint16_t reserved;
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
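The union introduced by the patch above lets the same 32 bits be read either as a whole or per sub-field. A minimal sketch of that, assuming the layout above (the print format is illustrative only):
#include <stdio.h>
#include <rte_mbuf.h>
/* Sketch only: dump a few of the packet_type sub-fields of an mbuf. */
static void dump_ptype(const struct rte_mbuf *m)
{
	printf("packet_type=0x%08x l3=%u tun=%u inner_l4=%u\n",
	       (unsigned)m->packet_type,
	       (unsigned)m->l3_type,
	       (unsigned)m->tun_type,
	       (unsigned)m->inner_l4_type);
}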
* [dpdk-dev] [PATCH v4 02/18] ixgbe: support of unified packet type for vector
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of packet_type in rte_mbuf Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 03/18] mbuf: add definitions of unified packet types Helin Zhang
` (16 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
To unify the packet type, bit masks of packet type for ol_flags are
replaced. In addition, more packet types (UDP, TCP and SCTP) are
supported in vectorized ixgbe PMD.
Note that around a 2% performance drop (64B) was observed when doing 4-port
(1 port per 82599 card) IO forwarding on the same SNB core.
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
config/common_linuxapp | 2 +-
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +++++++++++++++++++----------------
2 files changed, 27 insertions(+), 24 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v3 changes:
* Put vector ixgbe changes right after mbuf changes.
* Enabled vector ixgbe PMD by default together with changes for updated
vector PMD.
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 97d7bae..97f1c9e 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -166,7 +166,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=n
+CONFIG_RTE_IXGBE_INC_VECTOR=y
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
#
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
index 1f46f0f..eeb0ffb 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
@@ -134,44 +134,35 @@ ixgbe_rxq_rearm(struct igb_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE
-#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
- PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
- PKT_RX_IPV6_HDR_EXT))
-#define OLFLAGS_MASK_V (((uint64_t)OLFLAGS_MASK << 48) | \
- ((uint64_t)OLFLAGS_MASK << 32) | \
- ((uint64_t)OLFLAGS_MASK << 16) | \
- ((uint64_t)OLFLAGS_MASK))
-#define PTYPE_SHIFT (1)
+#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
+ ((uint64_t)PKT_RX_VLAN_PKT))
#define VTAG_SHIFT (3)
static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
- __m128i ptype0, ptype1, vtag0, vtag1;
+ __m128i vtag0, vtag1;
union {
uint16_t e[4];
uint64_t dword;
} vol;
- ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
- ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);
- ptype1 = _mm_unpacklo_epi32(ptype0, ptype1);
vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
-
- ptype1 = _mm_slli_epi16(ptype1, PTYPE_SHIFT);
vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);
- ptype1 = _mm_or_si128(ptype1, vtag1);
- vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
+ vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;
rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
rx_pkts[2]->ol_flags = vol.e[2];
rx_pkts[3]->ol_flags = vol.e[3];
}
+
#else
#define desc_to_olflags_v(desc, rx_pkts) do {} while (0)
#endif
@@ -197,13 +188,15 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
uint64_t var;
__m128i shuf_msk;
__m128i crc_adjust = _mm_set_epi16(
- 0, 0, 0, 0, /* ignore non-length fields */
+ 0, 0, 0, /* ignore non-length fields */
+ -rxq->crc_len, /* sub crc on data_len */
0, /* ignore high-16bits of pkt_len */
-rxq->crc_len, /* sub crc on pkt_len */
- -rxq->crc_len, /* sub crc on data_len */
- 0 /* ignore pkt_type field */
+ 0, 0 /* ignore pkt_type field */
);
__m128i dd_check, eop_check;
+ __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
+ 0xFFFFFFFF, 0xFFFF07F0);
if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -234,12 +227,13 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* mask to shuffle from desc. to mbuf */
shuf_msk = _mm_set_epi8(
7, 6, 5, 4, /* octet 4~7, 32bits rss */
- 0xFF, 0xFF, /* skip high 16 bits vlan_macip, zero out */
15, 14, /* octet 14~15, low 16 bits vlan_macip */
+ 13, 12, /* octet 12~13, 16 bits data_len */
0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
13, 12, /* octet 12~13, low 16 bits pkt_len */
- 13, 12, /* octet 12~13, 16 bits data_len */
- 0xFF, 0xFF /* skip pkt_type field */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_type */
+ 1, /* octet 1, 8 bits pkt_type field */
+ 0 /* octet 0, 4 bits offset 4 pkt_type field */
);
/* Cache is empty -> need to scan the buffer rings, but first move
@@ -248,6 +242,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/*
* A. load 4 packet in one loop
+ * [A*. mask out 4 unused dirty field in desc]
* B. copy 4 mbuf point from swring to rx_pkts
* C. calc the number of DD bits among the 4 packets
* [C*. extract the end-of-packet bit, if requested]
@@ -289,6 +284,14 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.2 copy 2 mbuf point into rx_pkts */
_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);
+ /* A* mask out 0~3 bits RSS type */
+ descs[3] = _mm_and_si128(descs[3], desc_mask);
+ descs[2] = _mm_and_si128(descs[2], desc_mask);
+
+ /* A* mask out 0~3 bits RSS type */
+ descs[1] = _mm_and_si128(descs[1], desc_mask);
+ descs[0] = _mm_and_si128(descs[0], desc_mask);
+
/* avoid compiler reorder optimization */
rte_compiler_barrier();
@@ -301,7 +304,7 @@ _recv_raw_pkts_vec(struct igb_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
- /* set ol_flags with packet type and vlan tag */
+ /* set ol_flags with vlan packet type */
desc_to_olflags_v(descs, &rx_pkts[pos]);
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 03/18] mbuf: add definitions of unified packet types
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of packet_type in rte_mbuf Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 02/18] ixgbe: support of unified packet type for vector Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 15:02 ` Olivier MATZ
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 04/18] e1000: support of unified packet type Helin Zhang
` (15 subsequent siblings)
18 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
There are only 6 bit flags in ol_flags for indicating packet
types, which is not enough to describe all the possible packet
types hardware can recognize. For example, i40e hardware can
recognize more than 150 packet types. The unified packet type is
composed of L2 type, L3 type, L4 type, tunnel type, inner L2 type,
inner L3 type and inner L4 type fields, and can be stored in the
32-bit 'packet_type' field of 'struct rte_mbuf'.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 253 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 253 insertions(+)
v3 changes:
* Put the definitions of unified packet type into a single patch.
v4 changes:
* Added detailed descriptions of each packet type.
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index f5b7a8b..8de57fd 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -194,6 +194,259 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
+/*
+ * 32 bits are divided into several fields to mark packet types. Note that
+ * each field holds an index value rather than individual bit flags.
+ * - Bit 3:0 is for L2 types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
+ * - Bit 15:12 is for tunnel types.
+ * - Bit 19:16 is for inner L2 types.
+ * - Bit 23:20 is for inner L3 types.
+ * - Bit 27:24 is for inner L4 types.
+ * - Bit 31:28 is reserved.
+ *
+ * To be compatible with Vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that the L3 type values are selected so that IPv4/IPv6 headers can be
+ * checked efficiently. Read the annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR before making any future changes to the L3 type values.
+ */
+#define RTE_PTYPE_UNKNOWN 0x00000000
+/**
+ * MAC (Media Access Control) packet type.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L2_MAC 0x00000001
+/**
+ * MAC (Media Access Control) packet type for time sync.
+ */
+#define RTE_PTYPE_L2_MAC_TIMESYNC 0x00000002
+/**
+ * ARP (Address Resolution Protocol) packet type.
+ */
+#define RTE_PTYPE_L2_ARP 0x00000003
+/**
+ * LLDP (Link Layer Discovery Protocol) packet type.
+ */
+#define RTE_PTYPE_L2_LLDP 0x00000004
+/**
+ * Mask of layer 2 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L2_MASK 0x0000000f
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * header option.
+ */
+#define RTE_PTYPE_L3_IPV4 0x00000010
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and contains header
+ * options.
+ */
+#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * extension header.
+ */
+#define RTE_PTYPE_L3_IPV6 0x00000040
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * header options.
+ */
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and contains extension
+ * headers.
+ */
+#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * extension headers.
+ */
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
+/**
+ * Mask of layer 3 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L3_MASK 0x000000f0
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_TCP 0x00000100
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_UDP 0x00000200
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_FRAG 0x00000300
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_SCTP 0x00000400
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_ICMP 0x00000500
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_NONFRAG 0x00000600
+/**
+ * Mask of layer 4 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_MASK 0x00000f00
+/**
+ * IP (Internet Protocol) in IP (Internet Protocol) tunneling packet type.
+ */
+#define RTE_PTYPE_TUNNEL_IP 0x00001000
+/**
+ * GRE (Generic Routing Encapsulation) tunneling packet type.
+ */
+#define RTE_PTYPE_TUNNEL_GRE 0x00002000
+/**
+ * VXLAN (Virtual eXtensible Local Area Network) tunneling packet type.
+ */
+#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
+/**
+ * NVGRE (Network Virtualization using Generic Routing Encapsulation) tunneling
+ * packet type.
+ */
+#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
+/**
+ * GENEVE (Generic Network Virtualization Encapsulation) tunneling packet type.
+ */
+#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
+/**
+ * Tunneling packet type of Teredo, VXLAN (Virtual eXtensible Local Area
+ * Network) or GRE (Generic Routing Encapsulation).
+ * It is used for tunneling packet type, which is unknown but must be one of
+ * Teredo, VXLAN or GRE.
+ */
+#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
+/**
+ * Mask of tunneling packet types.
+ */
+#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
+/**
+ * MAC (Media Access Control) packet type.
+ * It is used for inner packet type only.
+ */
+#define RTE_PTYPE_INNER_L2_MAC 0x00010000
+/**
+ * MAC (Media Access Control) packet type with VLAN (Virtual Local Area
+ * Network) tag.
+ */
+#define RTE_PTYPE_INNER_L2_MAC_VLAN 0x00020000
+/**
+ * Mask of inner layer 2 packet types.
+ */
+#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and does not contain any header option.
+ */
+#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and contains header options.
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and does not contain any extension header.
+ */
+#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and may or may not contain header options.
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and contains extension headers.
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and may or may not contain extension
+ * headers.
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
+/**
+ * Mask of inner layer 3 packet types.
+ */
+#define RTE_PTYPE_INNER_INNER_L3_MASK 0x00f00000
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for inner packet only.
+ */
+#define RTE_PTYPE_INNER_L4_TCP 0x01000000
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for inner packet only.
+ */
+#define RTE_PTYPE_INNER_L4_UDP 0x02000000
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not have a layer 4 packet.
+ */
+#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for inner packet only.
+ */
+#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for inner packet only.
+ */
+#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not have other unknown layer
+ * 4 packet types.
+ */
+#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
+/**
+ * Mask of inner layer 4 packet types.
+ */
+#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
+ * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
+ * determin if it is an IPV4 packet.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
+ * one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
+ * determine if it is an IPv6 packet.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneling packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+
/**
* Get the name of a RX offload flag
*
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
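The RTE_ETH_IS_* helpers defined at the end of the patch above are the intended fast checks for applications. A minimal sketch of a receive loop using them; port/queue setup is assumed to have been done elsewhere and the burst size is arbitrary:
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
/* Sketch only: count IPv4, IPv6 and tunneled packets on one RX queue. */
static void count_rx_types(uint8_t port, uint16_t queue,
			   uint64_t *ipv4, uint64_t *ipv6, uint64_t *tunnel)
{
	struct rte_mbuf *pkts[32];
	uint16_t i, nb = rte_eth_rx_burst(port, queue, pkts, 32);
	for (i = 0; i < nb; i++) {
		uint32_t ptype = pkts[i]->packet_type;
		if (RTE_ETH_IS_IPV4_HDR(ptype))
			(*ipv4)++;
		else if (RTE_ETH_IS_IPV6_HDR(ptype))
			(*ipv6)++;
		if (RTE_ETH_IS_TUNNEL_PKT(ptype))
			(*tunnel)++;
		rte_pktmbuf_free(pkts[i]);
	}
}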
* [dpdk-dev] [PATCH v4 04/18] e1000: support of unified packet type
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (2 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 03/18] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 05/18] ixgbe: " Helin Zhang
` (14 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++++++++++++++++++++++++++++++++++-------
1 file changed, 83 insertions(+), 15 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index cdf2cac..4fa3ede 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -591,17 +591,85 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
+#define IGB_PACKET_TYPE_IPV4 0X01
+#define IGB_PACKET_TYPE_IPV4_TCP 0X11
+#define IGB_PACKET_TYPE_IPV4_UDP 0X21
+#define IGB_PACKET_TYPE_IPV4_SCTP 0X41
+#define IGB_PACKET_TYPE_IPV4_EXT 0X03
+#define IGB_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IGB_PACKET_TYPE_IPV6 0X04
+#define IGB_PACKET_TYPE_IPV6_TCP 0X14
+#define IGB_PACKET_TYPE_IPV6_UDP 0X24
+#define IGB_PACKET_TYPE_IPV6_EXT 0X0C
+#define IGB_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IGB_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IGB_PACKET_TYPE_IPV4_IPV6 0X05
+#define IGB_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IGB_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IGB_PACKET_TYPE_MAX 0X80
+#define IGB_PACKET_TYPE_MASK 0X7F
+#define IGB_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IGB_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IGB_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [IGB_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IGB_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IGB_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & E1000_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IGB_PACKET_TYPE_SHIFT) & IGB_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- };
+ uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH;
#if defined(RTE_LIBRTE_IEEE1588)
static uint32_t ip_pkt_etqf_map[8] = {
@@ -609,14 +677,10 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
0, 0, 0, 0,
};
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-#else
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ pkt_flags |= ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07];
#endif
- return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
+
+ return pkt_flags;
}
static inline uint64_t
@@ -791,6 +855,8 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
+ lo_dword.hs_rss.pkt_info);
/*
* Store the mbuf address into the next entry of the array
@@ -1025,6 +1091,8 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
+ lower.lo_dword.hs_rss.pkt_info);
/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
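The driver-side pattern in the patch above is a straight table lookup: the packet-type index reported in the RX descriptor selects a precomputed RTE_PTYPE_* combination. A simplified sketch of that pattern follows; the three index values shown are taken from the IGB_PACKET_TYPE_* defines above, and the rest of the table is elided:
#include <stdint.h>
#include <rte_mbuf.h>
/* Sketch only: map a 7-bit hardware packet-type index to a unified type. */
static uint32_t ptype_from_hw_index(uint8_t idx)
{
	static const uint32_t table[0x80] = {
		[0x01] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
		[0x11] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
		[0x21] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
		/* ... one entry per index the hardware can report ... */
	};
	/* unknown indexes fall through to 0, i.e. RTE_PTYPE_UNKNOWN */
	return table[idx & 0x7f];
}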
* [dpdk-dev] [PATCH v4 05/18] ixgbe: support of unified packet type
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (3 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 04/18] e1000: support of unified packet type Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 06/18] i40e: " Helin Zhang
` (13 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
To unify packet type among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Note that around a 2.5% performance drop (64B) was observed when doing
4-port (1 port per 82599 card) IO forwarding on the same SNB core.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 146 +++++++++++++++++++++++++++++---------
1 file changed, 112 insertions(+), 34 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 3059375..a8d99be 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -855,40 +855,107 @@ end_of_tx:
* RX functions
*
**********************************************************************/
-static inline uint64_t
-rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
+#define IXGBE_PACKET_TYPE_IPV4 0X01
+#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
+#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
+#define IXGBE_PACKET_TYPE_IPV4_SCTP 0X41
+#define IXGBE_PACKET_TYPE_IPV4_EXT 0X03
+#define IXGBE_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IXGBE_PACKET_TYPE_IPV6 0X04
+#define IXGBE_PACKET_TYPE_IPV6_TCP 0X14
+#define IXGBE_PACKET_TYPE_IPV6_UDP 0X24
+#define IXGBE_PACKET_TYPE_IPV6_EXT 0X0C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IXGBE_PACKET_TYPE_IPV4_IPV6 0X05
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IXGBE_PACKET_TYPE_MAX 0X80
+#define IXGBE_PACKET_TYPE_MASK 0X7F
+#define IXGBE_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+ixgbe_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
+ static const uint32_t
+ ptype_table[IXGBE_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IXGBE_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4,
+ [IXGBE_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IXGBE_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IXGBE_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
};
+ if (unlikely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
- static uint64_t ip_rss_types_map[16] = {
+ pkt_info = (pkt_info >> IXGBE_PACKET_TYPE_SHIFT) &
+ IXGBE_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
+{
+ static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
PKT_RX_RSS_HASH, 0, 0, 0,
0, 0, 0, PKT_RX_FDIR,
};
-
#ifdef RTE_LIBRTE_IEEE1588
static uint64_t ip_pkt_etqf_map[8] = {
0, 0, 0, PKT_RX_IEEE1588_PTP,
0, 0, 0, 0,
};
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ if (likely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return ip_pkt_etqf_map[(pkt_info >> 4) & 0X07] |
+ ip_rss_types_map[pkt_info & 0xF];
+ else
+ return ip_rss_types_map[pkt_info & 0xF];
#else
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-
+ return ip_rss_types_map[pkt_info & 0xF];
#endif
- return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
}
static inline uint64_t
@@ -945,7 +1012,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
- int s[LOOK_AHEAD], nb_dd;
+ int nb_dd;
+ uint32_t s[LOOK_AHEAD];
+ uint16_t pkt_info[LOOK_AHEAD];
int i, j, nb_rx = 0;
@@ -968,6 +1037,9 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rxdp[j].wb.upper.status_error;
+ for (j = LOOK_AHEAD-1; j >= 0; --j)
+ pkt_info[j] = rxdp[j].wb.lower.lo_dword.hs_rss.pkt_info;
+
/* Compute how many status bits were set */
nb_dd = 0;
for (j = 0; j < LOOK_AHEAD; ++j)
@@ -985,12 +1057,13 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
- rxdp[j].wb.lower.lo_dword.data);
- /* reuse status field from scan list */
- pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
+ pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags |=
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
mb->ol_flags = pkt_flags;
+ mb->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);
if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
@@ -1187,7 +1260,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint32_t pkt_info;
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1305,14 +1378,17 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;
- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
@@ -1371,7 +1447,7 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma; /* Physical address of mbuf data buffer */
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint16_t pkt_info;
uint16_t rx_id;
uint16_t nb_rx;
uint16_t nb_hold;
@@ -1550,13 +1626,15 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* set in the pkt_flags field.
*/
first_seg->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = (pkt_flags |
- rx_desc_status_to_pkt_flags(staterr));
- pkt_flags = (pkt_flags |
- rx_desc_error_to_pkt_flags(staterr));
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
if (likely(pkt_flags & PKT_RX_RSS_HASH))
first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 06/18] i40e: support of unified packet type
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (4 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 05/18] ixgbe: " Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 07/18] enic: " Helin Zhang
` (12 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++++++++++++++--------------
1 file changed, 512 insertions(+), 274 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 12c0831..6764978 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -151,272 +151,511 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
return flags;
}
-/* Translate pkt types to pkt flags */
-static inline uint64_t
-i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
+/* The hardware datasheet describes in detail what each packet type value means */
+static inline uint32_t
+i40e_rxd_pkt_type_mapping(uint8_t ptype)
{
- uint8_t ptype = (uint8_t)((qword & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
- static const uint64_t ip_ptype_map[I40E_MAX_PKT_TYPE] = {
- 0, /* PTYPE 0 */
- 0, /* PTYPE 1 */
- 0, /* PTYPE 2 */
- 0, /* PTYPE 3 */
- 0, /* PTYPE 4 */
- 0, /* PTYPE 5 */
- 0, /* PTYPE 6 */
- 0, /* PTYPE 7 */
- 0, /* PTYPE 8 */
- 0, /* PTYPE 9 */
- 0, /* PTYPE 10 */
- 0, /* PTYPE 11 */
- 0, /* PTYPE 12 */
- 0, /* PTYPE 13 */
- 0, /* PTYPE 14 */
- 0, /* PTYPE 15 */
- 0, /* PTYPE 16 */
- 0, /* PTYPE 17 */
- 0, /* PTYPE 18 */
- 0, /* PTYPE 19 */
- 0, /* PTYPE 20 */
- 0, /* PTYPE 21 */
- PKT_RX_IPV4_HDR, /* PTYPE 22 */
- PKT_RX_IPV4_HDR, /* PTYPE 23 */
- PKT_RX_IPV4_HDR, /* PTYPE 24 */
- 0, /* PTYPE 25 */
- PKT_RX_IPV4_HDR, /* PTYPE 26 */
- PKT_RX_IPV4_HDR, /* PTYPE 27 */
- PKT_RX_IPV4_HDR, /* PTYPE 28 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 29 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 30 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 31 */
- 0, /* PTYPE 32 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 33 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 34 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 35 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 36 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 37 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 38 */
- 0, /* PTYPE 39 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 40 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 41 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 42 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 43 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 44 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 45 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 46 */
- 0, /* PTYPE 47 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 48 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 49 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 50 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 51 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 52 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 53 */
- 0, /* PTYPE 54 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 55 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 56 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 57 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 58 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 59 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 60 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 61 */
- 0, /* PTYPE 62 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 63 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 64 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 65 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 66 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 67 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 68 */
- 0, /* PTYPE 69 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 70 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 71 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 72 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 73 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 74 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 75 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 76 */
- 0, /* PTYPE 77 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 78 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 79 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 80 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 81 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 82 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 83 */
- 0, /* PTYPE 84 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 85 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 86 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 87 */
- PKT_RX_IPV6_HDR, /* PTYPE 88 */
- PKT_RX_IPV6_HDR, /* PTYPE 89 */
- PKT_RX_IPV6_HDR, /* PTYPE 90 */
- 0, /* PTYPE 91 */
- PKT_RX_IPV6_HDR, /* PTYPE 92 */
- PKT_RX_IPV6_HDR, /* PTYPE 93 */
- PKT_RX_IPV6_HDR, /* PTYPE 94 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 95 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 96 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 97 */
- 0, /* PTYPE 98 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 99 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 100 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 101 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 102 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 103 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 104 */
- 0, /* PTYPE 105 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 106 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 107 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 108 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 109 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 110 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 111 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 112 */
- 0, /* PTYPE 113 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 114 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 115 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 116 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 117 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 118 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 119 */
- 0, /* PTYPE 120 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 121 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 122 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 123 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 124 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 125 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 126 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 127 */
- 0, /* PTYPE 128 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 129 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 130 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 131 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 132 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 133 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 134 */
- 0, /* PTYPE 135 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 136 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 137 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 138 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 139 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 140 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 141 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 142 */
- 0, /* PTYPE 143 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 144 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 145 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 146 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 147 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 148 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 149 */
- 0, /* PTYPE 150 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 151 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 152 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 153 */
- 0, /* PTYPE 154 */
- 0, /* PTYPE 155 */
- 0, /* PTYPE 156 */
- 0, /* PTYPE 157 */
- 0, /* PTYPE 158 */
- 0, /* PTYPE 159 */
- 0, /* PTYPE 160 */
- 0, /* PTYPE 161 */
- 0, /* PTYPE 162 */
- 0, /* PTYPE 163 */
- 0, /* PTYPE 164 */
- 0, /* PTYPE 165 */
- 0, /* PTYPE 166 */
- 0, /* PTYPE 167 */
- 0, /* PTYPE 168 */
- 0, /* PTYPE 169 */
- 0, /* PTYPE 170 */
- 0, /* PTYPE 171 */
- 0, /* PTYPE 172 */
- 0, /* PTYPE 173 */
- 0, /* PTYPE 174 */
- 0, /* PTYPE 175 */
- 0, /* PTYPE 176 */
- 0, /* PTYPE 177 */
- 0, /* PTYPE 178 */
- 0, /* PTYPE 179 */
- 0, /* PTYPE 180 */
- 0, /* PTYPE 181 */
- 0, /* PTYPE 182 */
- 0, /* PTYPE 183 */
- 0, /* PTYPE 184 */
- 0, /* PTYPE 185 */
- 0, /* PTYPE 186 */
- 0, /* PTYPE 187 */
- 0, /* PTYPE 188 */
- 0, /* PTYPE 189 */
- 0, /* PTYPE 190 */
- 0, /* PTYPE 191 */
- 0, /* PTYPE 192 */
- 0, /* PTYPE 193 */
- 0, /* PTYPE 194 */
- 0, /* PTYPE 195 */
- 0, /* PTYPE 196 */
- 0, /* PTYPE 197 */
- 0, /* PTYPE 198 */
- 0, /* PTYPE 199 */
- 0, /* PTYPE 200 */
- 0, /* PTYPE 201 */
- 0, /* PTYPE 202 */
- 0, /* PTYPE 203 */
- 0, /* PTYPE 204 */
- 0, /* PTYPE 205 */
- 0, /* PTYPE 206 */
- 0, /* PTYPE 207 */
- 0, /* PTYPE 208 */
- 0, /* PTYPE 209 */
- 0, /* PTYPE 210 */
- 0, /* PTYPE 211 */
- 0, /* PTYPE 212 */
- 0, /* PTYPE 213 */
- 0, /* PTYPE 214 */
- 0, /* PTYPE 215 */
- 0, /* PTYPE 216 */
- 0, /* PTYPE 217 */
- 0, /* PTYPE 218 */
- 0, /* PTYPE 219 */
- 0, /* PTYPE 220 */
- 0, /* PTYPE 221 */
- 0, /* PTYPE 222 */
- 0, /* PTYPE 223 */
- 0, /* PTYPE 224 */
- 0, /* PTYPE 225 */
- 0, /* PTYPE 226 */
- 0, /* PTYPE 227 */
- 0, /* PTYPE 228 */
- 0, /* PTYPE 229 */
- 0, /* PTYPE 230 */
- 0, /* PTYPE 231 */
- 0, /* PTYPE 232 */
- 0, /* PTYPE 233 */
- 0, /* PTYPE 234 */
- 0, /* PTYPE 235 */
- 0, /* PTYPE 236 */
- 0, /* PTYPE 237 */
- 0, /* PTYPE 238 */
- 0, /* PTYPE 239 */
- 0, /* PTYPE 240 */
- 0, /* PTYPE 241 */
- 0, /* PTYPE 242 */
- 0, /* PTYPE 243 */
- 0, /* PTYPE 244 */
- 0, /* PTYPE 245 */
- 0, /* PTYPE 246 */
- 0, /* PTYPE 247 */
- 0, /* PTYPE 248 */
- 0, /* PTYPE 249 */
- 0, /* PTYPE 250 */
- 0, /* PTYPE 251 */
- 0, /* PTYPE 252 */
- 0, /* PTYPE 253 */
- 0, /* PTYPE 254 */
- 0, /* PTYPE 255 */
+ static const uint32_t ptype_table[UINT8_MAX] __rte_cache_aligned = {
+ /* L2 types */
+ /* [0] reserved */
+ [1] = RTE_PTYPE_L2_MAC,
+ [2] = RTE_PTYPE_L2_MAC_TIMESYNC,
+ /* [3] - [5] reserved */
+ [6] = RTE_PTYPE_L2_LLDP,
+ /* [7] - [10] reserved */
+ [11] = RTE_PTYPE_L2_ARP,
+ /* [12] - [21] reserved */
+
+ /* Non tunneled IPv4 */
+ [22] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [24] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [25] reserved */
+ [26] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv4 */
+ [29] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [30] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [31] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [32] reserved */
+ [33] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [34] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [35] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> IPv6 */
+ [36] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [37] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [38] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [39] reserved */
+ [40] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [41] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [42] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN */
+ [43] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+ [44] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [45] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [46] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [47] reserved */
+ [48] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [49] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [50] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+ [51] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [52] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [53] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [54] reserved */
+ [55] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [56] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [57] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+ [58] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [59] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [60] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [61] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [62] reserved */
+ [63] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [64] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [65] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [66] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [67] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [68] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [69] reserved */
+ [70] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [71] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [72] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [73] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [74] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [75] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [76] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [77] reserved */
+ [78] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [79] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [80] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [81] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [82] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [83] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [84] reserved */
+ [85] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [86] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [87] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* Non tunneled IPv6 */
+ [88] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [90] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [91] reserved */
+ [92] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv4 */
+ [95] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [96] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [97] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [98] reserved */
+ [99] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [100] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [101] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> IPv6 */
+ [102] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [103] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [104] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [105] reserved */
+ [106] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [107] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [108] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN */
+ [109] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+ [110] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [111] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [112] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [113] reserved */
+ [114] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [115] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [116] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+ [117] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [118] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [119] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [120] reserved */
+ [121] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [122] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [123] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+ [124] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [125] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [126] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [127] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [128] reserved */
+ [129] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [130] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [131] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [132] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [133] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [134] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [135] reserved */
+ [136] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [137] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [138] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [139] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [140] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [141] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [142] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [143] reserved */
+ [144] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [145] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [146] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [147] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [148] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [149] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [150] reserved */
+ [151] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [152] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [153] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* All others reserved */
};
- return ip_ptype_map[ptype];
+ return ptype_table[ptype];
}
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
@@ -702,11 +941,11 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- mb->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ mb->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -945,9 +1184,9 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ rxm->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1104,10 +1343,9 @@ i40e_recv_scattered_pkts(void *rx_queue,
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- first_seg->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ first_seg->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 07/18] enic: support of unified packet type
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (5 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 06/18] i40e: " Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 08/18] vmxnet3: " Helin Zhang
` (11 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
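The change boils down to the pattern sketched below (illustrative only, not
the actual enic code; report_l3_type() and the is_ipv4/is_ipv6/csum_bad
parameters are made-up stand-ins for what the hardware descriptor reports):
the L3 type now goes into packet_type, while checksum status stays in
ol_flags.

#include <rte_mbuf.h>

/* Illustrative sketch of the new reporting pattern. */
static inline void
report_l3_type(struct rte_mbuf *m, int is_ipv4, int is_ipv6, int csum_bad)
{
	if (is_ipv4)
		m->packet_type = RTE_PTYPE_L3_IPV4;  /* was: ol_flags |= PKT_RX_IPV4_HDR */
	else if (is_ipv6)
		m->packet_type = RTE_PTYPE_L3_IPV6;  /* was: ol_flags |= PKT_RX_IPV6_HDR */

	if (csum_bad)
		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;  /* checksum status stays in ol_flags */
}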
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_enic/enic_main.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_enic/enic_main.c b/lib/librte_pmd_enic/enic_main.c
index c66f139..701d506 100644
--- a/lib/librte_pmd_enic/enic_main.c
+++ b/lib/librte_pmd_enic/enic_main.c
@@ -423,7 +423,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -432,7 +432,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Header split */
if (sop && !eop) {
@@ -445,7 +445,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -457,13 +457,14 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -475,7 +476,8 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV6;
}
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 08/18] vmxnet3: support of unified packet type
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (6 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 07/18] enic: " Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 09/18] fm10k: " Helin Zhang
` (10 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
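The only classification done in software here is plain vs. extended IPv4
header, based on the IHL field. Condensed into a helper, the logic of the
hunk below is roughly the following (sketch only; ipv4_l3_ptype() is not a
function added by this patch):

#include <rte_ip.h>
#include <rte_mbuf.h>

/* Sketch: report RTE_PTYPE_L3_IPV4_EXT when the IPv4 header carries
 * options (IHL > 5, i.e. header longer than 20 bytes). */
static inline uint32_t
ipv4_l3_ptype(const struct ipv4_hdr *ip)
{
	if (((ip->version_ihl & 0x0f) << 2) > (int)sizeof(struct ipv4_hdr))
		return RTE_PTYPE_L3_IPV4_EXT;
	return RTE_PTYPE_L3_IPV4;
}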
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Yong Wang <yongwang@vmware.com>
---
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 4d8a010..831e676 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -650,9 +650,9 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
- rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
else
- rxm->ol_flags |= PKT_RX_IPV4_HDR;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4;
if (!rcd->cnc) {
if (!rcd->ipc)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 09/18] fm10k: support of unified packet type
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (7 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 08/18] vmxnet3: " Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 10/18] app/test-pipeline: " Helin Zhang
` (9 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
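The approach is the same table lookup already used for i40e: the
packet-type bits from the RX descriptor index a static table of
RTE_PTYPE_* combinations. A minimal sketch of the pattern
(HW_PTYPE_MASK/HW_PTYPE_SHIFT and the table contents are generic
placeholders, not the real fm10k descriptor layout):

#include <stdint.h>
#include <rte_mbuf.h>

#define HW_PTYPE_MASK  0x07	/* placeholder for the device-specific mask */
#define HW_PTYPE_SHIFT 0	/* placeholder for the device-specific shift */

static const uint32_t hw_ptype_table[8] = {
	[0] = RTE_PTYPE_L2_MAC,
	[1] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
	[2] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
	/* ... one entry per value the hardware can report ... */
};

static inline void
set_ptype_from_desc(struct rte_mbuf *m, uint32_t desc_pkt_info)
{
	m->packet_type =
		hw_ptype_table[(desc_pkt_info & HW_PTYPE_MASK) >> HW_PTYPE_SHIFT];
}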
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_fm10k/fm10k_rxtx.c | 30 +++++++++++++++++++++---------
1 file changed, 21 insertions(+), 9 deletions(-)
v4 changes:
* Supported unified packet type of fm10k from v4.
diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c
index 83bddfc..2a2e778 100644
--- a/lib/librte_pmd_fm10k/fm10k_rxtx.c
+++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c
@@ -65,13 +65,29 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
static inline void
rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
{
- uint16_t ptype;
- static const uint16_t pt_lut[] = { 0,
- PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT,
- 0, 0, 0
+ static const uint32_t
+ ptype_table[FM10K_RXD_PKTTYPE_MASK >> FM10K_RXD_PKTTYPE_SHIFT]
+ __rte_cache_aligned = {
+ [FM10K_PKTTYPE_OTHER] = RTE_PTYPE_L2_MAC,
+ [FM10K_PKTTYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [FM10K_PKTTYPE_IPV4_EX] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [FM10K_PKTTYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [FM10K_PKTTYPE_IPV6_EX] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
};
+ m->packet_type = ptype_table[(d->w.pkt_info & FM10K_RXD_PKTTYPE_MASK)
+ >> FM10K_RXD_PKTTYPE_SHIFT];
+
if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK)
m->ol_flags |= PKT_RX_RSS_HASH;
@@ -93,10 +109,6 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
if (unlikely(d->d.staterr & FM10K_RXD_STATUS_RXE))
m->ol_flags |= PKT_RX_RECIP_ERR;
-
- ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >>
- FM10K_RXD_PKTTYPE_SHIFT;
- m->ol_flags |= pt_lut[(uint8_t)ptype];
}
uint16_t
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 10/18] app/test-pipeline: support of unified packet type
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (8 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 09/18] fm10k: " Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 11/18] app/testpmd: " Helin Zhang
` (8 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pipeline/pipeline_hash.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 4598ad4..548615f 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,20 +459,21 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;
k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;
memcpy(key, ipv6_dst, 16);
- }
+ } else
+ continue;
*signature = test_hash(key, 0, 0);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 11/18] app/testpmd: support of unified packet type
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (9 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 10/18] app/test-pipeline: " Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 12/18] app/test: Remove useless code Helin Zhang
` (7 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
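The rxonly change below prints each sub-field of packet_type separately;
every field sits behind its own RTE_PTYPE_*_MASK, so the decoding is simply
a mask-and-switch per field. Condensed to one field it looks like this
(sketch; print_outer_l3() is not a function added by the patch, and only a
few cases are shown):

#include <stdio.h>
#include <rte_mbuf.h>

/* Sketch: decode only the outer L3 field of packet_type. */
static void
print_outer_l3(uint32_t packet_type)
{
	switch (packet_type & RTE_PTYPE_L3_MASK) {
	case RTE_PTYPE_L3_IPV4:
		printf(" - (outer) L3 type: IPV4");
		break;
	case RTE_PTYPE_L3_IPV6:
		printf(" - (outer) L3 type: IPV6");
		break;
	default:
		printf(" - (outer) L3 type: Unknown");
		break;
	}
}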
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
app/test-pmd/csumonly.c | 10 +--
app/test-pmd/rxonly.c | 178 ++++++++++++++++++++++++++++++++++++++++++++++--
2 files changed, 177 insertions(+), 11 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v4 changes:
* Added printing logs of packet types of each received packet in rxonly mode.
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 52cbd8a..e3e0c8a 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -203,8 +203,9 @@ parse_ethernet(struct ether_hdr *eth_hdr, struct testpmd_offload_info *info)
/* Parse a vxlan header */
static void
-parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
- uint64_t mbuf_olflags)
+parse_vxlan(struct udp_hdr *udp_hdr,
+ struct testpmd_offload_info *info,
+ uint32_t pkt_type)
{
struct ether_hdr *eth_hdr;
@@ -212,8 +213,7 @@ parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
* (rfc7348) or that the rx offload flag is set (i40e only
* currently) */
if (udp_hdr->dst_port != _htons(4789) &&
- (mbuf_olflags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR)) == 0)
+ RTE_ETH_IS_TUNNEL_PKT(pkt_type) == 0)
return;
info->is_tunnel = 1;
@@ -550,7 +550,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
struct udp_hdr *udp_hdr;
udp_hdr = (struct udp_hdr *)((char *)l3_hdr +
info.l3_len);
- parse_vxlan(udp_hdr, &info, m->ol_flags);
+ parse_vxlan(udp_hdr, &info, m->packet_type);
} else if (info.l4_proto == IPPROTO_GRE) {
struct simple_gre_hdr *gre_hdr;
gre_hdr = (struct simple_gre_hdr *)
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index fdfe990..affc8ed 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -92,7 +92,7 @@ pkt_burst_receive(struct fwd_stream *fs)
uint64_t ol_flags;
uint16_t nb_rx;
uint16_t i, packet_type;
- uint64_t is_encapsulation;
+ uint16_t is_encapsulation;
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
@@ -135,10 +135,7 @@ pkt_burst_receive(struct fwd_stream *fs)
eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
-
- is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR);
-
+ is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
print_ether_addr(" src=", ð_hdr->s_addr);
print_ether_addr(" - dst=", ð_hdr->d_addr);
printf(" - type=0x%04x - length=%u - nb_segs=%d",
@@ -161,6 +158,175 @@ pkt_burst_receive(struct fwd_stream *fs)
}
if (ol_flags & PKT_RX_VLAN_PKT)
printf(" - VLAN tci=0x%x", mb->vlan_tci);
+ if (mb->packet_type) {
+ uint32_t ptype;
+
+ /* (outer) L2 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L2_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L2_MAC:
+ printf(" - (outer) L2 type: MAC");
+ break;
+ case RTE_PTYPE_L2_MAC_TIMESYNC:
+ printf(" - (outer) L2 type: MAC Timesync");
+ break;
+ case RTE_PTYPE_L2_ARP:
+ printf(" - (outer) L2 type: ARP");
+ break;
+ case RTE_PTYPE_L2_LLDP:
+ printf(" - (outer) L2 type: LLDP");
+ break;
+ default:
+ printf(" - (outer) L2 type: Unknown");
+ break;
+ }
+
+ /* (outer) L3 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L3_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L3_IPV4:
+ printf(" - (outer) L3 type: IPV4");
+ break;
+ case RTE_PTYPE_L3_IPV4_EXT:
+ printf(" - (outer) L3 type: IPV4_EXT");
+ break;
+ case RTE_PTYPE_L3_IPV6:
+ printf(" - (outer) L3 type: IPV6");
+ break;
+ case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
+ printf(" - (outer) L3 type: IPV4_EXT_UNKNOWN");
+ break;
+ case RTE_PTYPE_L3_IPV6_EXT:
+ printf(" - (outer) L3 type: IPV6_EXT");
+ break;
+ case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN:
+ printf(" - (outer) L3 type: IPV6_EXT_UNKNOWN");
+ break;
+ default:
+ printf(" - (outer) L3 type: Unknown");
+ break;
+ }
+
+ /* (outer) L4 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L4_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L4_TCP:
+ printf(" - (outer) L4 type: TCP");
+ break;
+ case RTE_PTYPE_L4_UDP:
+ printf(" - (outer) L4 type: UDP");
+ break;
+ case RTE_PTYPE_L4_FRAG:
+ printf(" - (outer) L4 type: L4_FRAG");
+ break;
+ case RTE_PTYPE_L4_SCTP:
+ printf(" - (outer) L4 type: SCTP");
+ break;
+ case RTE_PTYPE_L4_ICMP:
+ printf(" - (outer) L4 type: ICMP");
+ break;
+ case RTE_PTYPE_L4_NONFRAG:
+ printf(" - (outer) L4 type: L4_NONFRAG");
+ break;
+ default:
+ printf(" - (outer) L4 type: Unknown");
+ break;
+ }
+
+ /* packet tunnel type */
+ ptype = mb->packet_type & RTE_PTYPE_TUNNEL_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_TUNNEL_IP:
+ printf(" - Tunnel type: IP");
+ break;
+ case RTE_PTYPE_TUNNEL_GRE:
+ printf(" - Tunnel type: GRE");
+ break;
+ case RTE_PTYPE_TUNNEL_VXLAN:
+ printf(" - Tunnel type: VXLAN");
+ break;
+ case RTE_PTYPE_TUNNEL_NVGRE:
+ printf(" - Tunnel type: NVGRE");
+ break;
+ case RTE_PTYPE_TUNNEL_GENEVE:
+ printf(" - Tunnel type: GENEVE");
+ break;
+ case RTE_PTYPE_TUNNEL_GRENAT:
+ printf(" - Tunnel type: GRENAT");
+ break;
+ default:
+ printf(" - Tunnel type: Unkown");
+ break;
+ }
+
+ /* inner L2 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_L2_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L2_MAC:
+ printf(" - Inner L2 type: MAC");
+ break;
+ case RTE_PTYPE_INNER_L2_MAC_VLAN:
+ printf(" - Inner L2 type: MAC_VLAN");
+ break;
+ default:
+ printf(" - Inner L2 type: Unknown");
+ break;
+ }
+
+ /* inner L3 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_INNER_L3_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L3_IPV4:
+ printf(" - Inner L3 type: IPV4");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV4_EXT:
+ printf(" - Inner L3 type: IPV4_EXT");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6:
+ printf(" - Inner L3 type: IPV6");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN:
+ printf(" - Inner L3 type: IPV4_EXT_UNKNOWN");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6_EXT:
+ printf(" - Inner L3 type: IPV6_EXT");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN:
+ printf(" - Inner L3 type: IPV6_EXT_UNKOWN");
+ break;
+ default:
+ printf(" - Inner L3 type: Unkown");
+ break;
+ }
+
+ /* inner L4 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_L4_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L4_TCP:
+ printf(" - Inner L4 type: TCP");
+ break;
+ case RTE_PTYPE_INNER_L4_UDP:
+ printf(" - Inner L4 type: UDP");
+ break;
+ case RTE_PTYPE_INNER_L4_FRAG:
+ printf(" - Inner L4 type: L4_FRAG");
+ break;
+ case RTE_PTYPE_INNER_L4_SCTP:
+ printf(" - Inner L4 type: SCTP");
+ break;
+ case RTE_PTYPE_INNER_L4_ICMP:
+ printf(" - Inner L4 type: ICMP");
+ break;
+ case RTE_PTYPE_INNER_L4_NONFRAG:
+ printf(" - Inner L4 type: L4_NONFRAG");
+ break;
+ default:
+ printf(" - Inner L4 type: Unknown");
+ break;
+ }
+ printf("\n");
+ } else
+ printf("Unknown packet type\n");
if (is_encapsulation) {
struct ipv4_hdr *ipv4_hdr;
struct ipv6_hdr *ipv6_hdr;
@@ -174,7 +340,7 @@ pkt_burst_receive(struct fwd_stream *fs)
l2_len = sizeof(struct ether_hdr);
/* Do not support ipv4 option field */
- if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
l3_len = sizeof(struct ipv4_hdr);
ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 12/18] app/test: Remove useless code
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (10 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 11/18] app/testpmd: " Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 16:01 ` Gajdzica, MaciejX T
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 13/18] examples/ip_fragmentation: support of unified packet type Helin Zhang
` (6 subsequent siblings)
18 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
Several useless lines of code were added accidentally, and they block
packet type unification. They should be removed entirely.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test/packet_burst_generator.c | 10 ----------
1 file changed, 10 deletions(-)
v4 changes:
* Removed several useless code lines which block packet type unification.
diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index b46eed7..b9f8f1a 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -272,19 +272,9 @@ nomore_mbuf:
if (ipv4) {
pkt->vlan_tci = ETHER_TYPE_IPv4;
pkt->l3_len = sizeof(struct ipv4_hdr);
-
- if (vlan_enabled)
- pkt->ol_flags = PKT_RX_IPV4_HDR | PKT_RX_VLAN_PKT;
- else
- pkt->ol_flags = PKT_RX_IPV4_HDR;
} else {
pkt->vlan_tci = ETHER_TYPE_IPv6;
pkt->l3_len = sizeof(struct ipv6_hdr);
-
- if (vlan_enabled)
- pkt->ol_flags = PKT_RX_IPV6_HDR | PKT_RX_VLAN_PKT;
- else
- pkt->ol_flags = PKT_RX_IPV6_HDR;
}
pkts_burst[nb_pkt] = pkt;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 13/18] examples/ip_fragmentation: support of unified packet type
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (11 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 12/18] app/test: Remove useless code Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 14/18] examples/ip_reassembly: " Helin Zhang
` (5 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
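For the example applications the conversion is mechanical: a test of an
ol_flags bit becomes a test of packet_type through the
RTE_ETH_IS_IPV4_HDR()/RTE_ETH_IS_IPV6_HDR() helpers. Roughly (sketch only;
dispatch_by_l3(), handle_ipv4() and handle_ipv6() are made-up placeholders
for the example's own forwarding code):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void handle_ipv4(struct rte_mbuf *m);	/* placeholder */
static void handle_ipv6(struct rte_mbuf *m);	/* placeholder */

static void
dispatch_by_l3(struct rte_mbuf *m)
{
	if (RTE_ETH_IS_IPV4_HDR(m->packet_type))	/* was: ol_flags & PKT_RX_IPV4_HDR */
		handle_ipv4(m);
	else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))	/* was: ol_flags & PKT_RX_IPV6_HDR */
		handle_ipv6(m);
	else
		rte_pktmbuf_free(m);			/* neither IPv4 nor IPv6 */
}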
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_fragmentation/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index eac5427..152844e 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -286,7 +286,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
len = qconf->tx_mbufs[port_out].len;
/* if this is an IPv4 packet */
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
/* Read the lookup key (i.e. ip_dst) from the input packet */
@@ -320,9 +320,8 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
if (unlikely (len2 < 0))
return;
}
- }
- /* if this is an IPv6 packet */
- else if (m->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if this is an IPv6 packet */
struct ipv6_hdr *ip_hdr;
ipv6 = 1;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 14/18] examples/ip_reassembly: support of unified packet type
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (12 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 13/18] examples/ip_fragmentation: support of unified packet type Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 15/18] examples/l3fwd-acl: " Helin Zhang
` (4 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_reassembly/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8492153..5ef2135 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -357,7 +357,7 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dst_port = portid;
/* if packet is IPv4 */
- if (m->ol_flags & (PKT_RX_IPV4_HDR)) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
@@ -397,9 +397,8 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
}
eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
- }
- /* if packet is IPv6 */
- else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if packet is IPv6 */
struct ipv6_extension_fragment *frag_hdr;
struct ipv6_hdr *ip_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 15/18] examples/l3fwd-acl: support of unified packet type
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (13 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 14/18] examples/ip_reassembly: " Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 16/18] examples/l3fwd-power: " Helin Zhang
` (3 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-acl/main.c | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index e851768..5df2e83 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -648,9 +648,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];
- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt,
unsigned char *) + sizeof(struct ether_hdr));
@@ -671,8 +669,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
rte_pktmbuf_free(pkt);
}
- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -690,17 +687,13 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];
- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
-
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;
- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -748,9 +741,9 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
- if (m->ol_flags & PKT_RX_IPV4_HDR)
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
dump_acl4_rule(m, res);
- else
+ else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
dump_acl6_rule(m, res);
}
#endif
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 16/18] examples/l3fwd-power: support of unified packet type
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (14 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 15/18] examples/l3fwd-acl: " Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 17/18] examples/l3fwd: " Helin Zhang
` (2 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-power/main.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index f6b55b9..964e5b9 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -638,7 +638,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr =
(struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char*)
@@ -673,8 +673,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
ether_addr_copy(&ports_eth_addr[dst_port], ð_hdr->s_addr);
send_single_packet(m, dst_port);
- }
- else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
struct ipv6_hdr *ipv6_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 17/18] examples/l3fwd: support of unified packet type
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (15 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 16/18] examples/l3fwd-power: " Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 18/18] mbuf: remove old packet type bit masks Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
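Besides the straightforward substitutions, note the vectorised path:
processx4_step1() now ANDs the packet_type of four mbufs, and the burst
loop tests the result against RTE_PTYPE_L3_IPV4, so the 4-packet fast path
is taken only when every packet of the group carries the IPv4 L3 type. The
reason this works is simply that a type bit survives the AND only if all
four packets have it (sketch; all_four_ipv4() is just an illustration, not
a function from the patch):

#include <rte_mbuf.h>

/* Sketch: a type bit survives the AND only if every packet in the
 * group carries it. */
static inline int
all_four_ipv4(struct rte_mbuf *pkts[4])
{
	uint32_t pkt_type = pkts[0]->packet_type & pkts[1]->packet_type &
			    pkts[2]->packet_type & pkts[3]->packet_type;

	return (pkt_type & RTE_PTYPE_L3_IPV4) != 0;
}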
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd/main.c | 71 +++++++++++++++++++++++++++++----------------------
1 file changed, 40 insertions(+), 31 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v3 changes:
* Minor bug fixes and enhancements.
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 6f7d7d4..49000f3 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -958,7 +958,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char *) +
sizeof(struct ether_hdr));
@@ -993,7 +993,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
send_single_packet(m, dst_port);
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;
@@ -1014,8 +1014,9 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], ð_hdr->s_addr);
send_single_packet(m, dst_port);
- }
-
+ } else
+ /* Free the mbuf that contains non-IPV4/IPV6 packet */
+ rte_pktmbuf_free(m);
}
#ifdef DO_RFC_1812_CHECKS
@@ -1039,11 +1040,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
-rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
{
uint8_t ihl;
- if ((flags & PKT_RX_IPV4_HDR) != 0) {
+ if (RTE_ETH_IS_IPV4_HDR(ptype)) {
ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
@@ -1074,11 +1075,11 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;
- if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
- } else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1112,17 +1113,19 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];
dst_port[0] = dp;
- rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
+ rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
}
/*
- * Read ol_flags and destination IPV4 addresses from 4 mbufs.
+ * Read packet_type and destination IPV4 addresses from 4 mbufs.
*/
static inline void
-processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
+processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
+ __m128i *dip,
+ uint32_t *ipv4_flag)
{
struct ipv4_hdr *ipv4_hdr;
struct ether_hdr *eth_hdr;
@@ -1131,22 +1134,22 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x0 = ipv4_hdr->dst_addr;
- flag[0] = pkt[0]->ol_flags & PKT_RX_IPV4_HDR;
+ ipv4_flag[0] = pkt[0]->packet_type & RTE_PTYPE_L3_IPV4;
eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x1 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[1]->ol_flags;
+ ipv4_flag[0] &= pkt[1]->packet_type;
eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x2 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[2]->ol_flags;
+ ipv4_flag[0] &= pkt[2]->packet_type;
eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x3 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[3]->ol_flags;
+ ipv4_flag[0] &= pkt[3]->packet_type;
dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
@@ -1156,8 +1159,12 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
-processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
- uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
+processx4_step2(const struct lcore_conf *qconf,
+ __m128i dip,
+ uint32_t ipv4_flag,
+ uint8_t portid,
+ struct rte_mbuf *pkt[FWDSTEP],
+ uint16_t dprt[FWDSTEP])
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1167,7 +1174,7 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);
/* if all 4 packets are IPV4. */
- if (likely(flag != 0)) {
+ if (likely(ipv4_flag)) {
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1218,13 +1225,13 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[3], te[3]);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
- &dst_port[0], pkt[0]->ol_flags);
+ &dst_port[0], pkt[0]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
- &dst_port[1], pkt[1]->ol_flags);
+ &dst_port[1], pkt[1]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
- &dst_port[2], pkt[2]->ol_flags);
+ &dst_port[2], pkt[2]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
- &dst_port[3], pkt[3]->ol_flags);
+ &dst_port[3], pkt[3]->packet_type);
}
/*
@@ -1411,7 +1418,7 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
- uint32_t flag[MAX_PKT_BURST / FWDSTEP];
+ uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
uint16_t pnum[MAX_PKT_BURST + 1];
#endif
@@ -1481,14 +1488,16 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
for (j = 0; j < n ; j+=4) {
- uint32_t ol_flag = pkts_burst[j]->ol_flags
- & pkts_burst[j+1]->ol_flags
- & pkts_burst[j+2]->ol_flags
- & pkts_burst[j+3]->ol_flags;
- if (ol_flag & PKT_RX_IPV4_HDR ) {
+ uint32_t pkt_type =
+ pkts_burst[j]->packet_type &
+ pkts_burst[j+1]->packet_type &
+ pkts_burst[j+2]->packet_type &
+ pkts_burst[j+3]->packet_type;
+ if (pkt_type & RTE_PTYPE_L3_IPV4) {
simple_ipv4_fwd_4pkts(&pkts_burst[j],
portid, qconf);
- } else if (ol_flag & PKT_RX_IPV6_HDR) {
+ } else if (pkt_type &
+ RTE_PTYPE_L3_IPV6) {
simple_ipv6_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1513,13 +1522,13 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
- &flag[j / FWDSTEP]);
+ &ipv4_flag[j / FWDSTEP]);
}
k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
- flag[j / FWDSTEP], portid,
+ ipv4_flag[j / FWDSTEP], portid,
&pkts_burst[j], &dst_port[j]);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v4 18/18] mbuf: remove old packet type bit masks
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (16 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 17/18] examples/l3fwd: " Helin Zhang
@ 2015-02-27 13:11 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-02-27 13:11 UTC (permalink / raw)
To: dev
As unified packet types are used instead, those old bit masks and
the relevant macros for packet type indication need to be removed.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 6 ------
lib/librte_mbuf/rte_mbuf.h | 14 ++++----------
2 files changed, 4 insertions(+), 16 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
* Redefined the bit masks for packet RX offload flags.
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 4c940bd..9650099 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -213,14 +213,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
- case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
- case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
- case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
- case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
- case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
- case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 8de57fd..fb30354 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -90,16 +90,10 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
-#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
-#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
-#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
-#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
-#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
-#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
-#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
-#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
-#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
-#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
+#define PKT_RX_IEEE1588_PTP (1ULL << 5) /**< RX IEEE1588 L2 Ethernet PT Packet. */
+#define PKT_RX_IEEE1588_TMST (1ULL << 6) /**< RX IEEE1588 L2/L4 timestamped packet.*/
+#define PKT_RX_FDIR_ID (1ULL << 7) /**< FD id reported if FDIR match. */
+#define PKT_RX_FDIR_FLX (1ULL << 8) /**< Flexible bytes reported if FDIR match. */
/* add new RX flags here */
/* add new TX flags here */
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v4 03/18] mbuf: add definitions of unified packet types
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 03/18] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-02-27 15:02 ` Olivier MATZ
0 siblings, 0 replies; 257+ messages in thread
From: Olivier MATZ @ 2015-02-27 15:02 UTC (permalink / raw)
To: Helin Zhang, dev
Hi Helin,
On 02/27/2015 02:11 PM, Helin Zhang wrote:
> As there are only 6 bit flags in ol_flags for indicating packet
> types, which is not enough to describe all the possible packet
> types hardware can recognize. For example, i40e hardware can
> recognize more than 150 packet types. Unified packet type is
> composed of L2 type, L3 type, L4 type, tunnel type, inner L2 type,
> inner L3 type and inner L4 type fields, and can be stored in
> 'struct rte_mbuf' of 32 bits field 'packet_type'.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
That's not what I asked in
http://dpdk.org/ml/archives/dev/2015-February/013423.html
A definition of what a packet type means in terms of packet content is
really required, both for the PMD developer and for the application
developer.
By reading the comment, we should be able to answer the following
question:
- What are the required conditions on the packet headers to recognize
this type? The conditions should be formally described in the comment.
By reading the comment, we should be able to know whether the following
packets can or must not be recognized as an RTE_PTYPE_L3_IPV4:
<Ether type=0x800 |<IP version=4L ihl=5L tos=0x0 len=20 id=1 flags=
frag=0L ttl=64 proto=0 chksum=0x7ce7 src=1.1.1.1 dst=1.1.1.2 |>>
<Ether type=*0x1234* |<IP version=4L ihl=5L tos=0x0 len=20 id=1 flags=
frag=0L ttl=64 proto=0 chksum=0x7ce7 src=1.1.1.1 dst=1.1.1.2 |>>
<Ether type=0x800 |<IP version=*3L* ihl=5L tos=0x0 len=20 id=1 flags=
frag=0L ttl=64 proto=0 chksum=0x7ce7 src=1.1.1.1 dst=1.1.1.2 |>>
<Ether type=0x800 |<IP version=4L ihl=*1L* tos=0x0 len=20 id=1 flags=
frag=0L ttl=64 proto=0 chksum=0x7ce7 src=1.1.1.1 dst=1.1.1.2 |>>
<Ether type=0x800 |<IP version=4L ihl=*8L* tos=0x0 len=20 id=1 flags=
frag=0L ttl=64 proto=0 chksum=0x7ce7 src=1.1.1.1 dst=1.1.1.2 |>>
<Ether type=0x800 |<IP version=4L ihl=5L tos=0x0 len=*0* id=1 flags=
frag=0L ttl=64 proto=0 chksum=0x7ce7 src=1.1.1.1 dst=1.1.1.2 |>>
<Ether type=0x800 |<IP version=4L ihl=5L tos=0x0 len=20 id=1 flags=
frag=0L ttl=64 proto=0 chksum=*0x1234* src=1.1.1.1 dst=1.1.1.2 |>>
<Ether type=0x800 |<IP version=4L ihl=5L tos=0x0 len=20 id=1 flags=
frag=*1234L* ttl=64 proto=0 chksum=*0* src=1.1.1.1 dst=1.1.1.2 |>>
...
Here is an example of why it is important:
Let's assume the definition of RTE_PTYPE_L3_IPV4 is:
- IP version field is 4
- no IP options (header size is 20, ihl=5)
- layer 2 identified the packet as IP (ex: ethertype=0x800)
If a hardware XYZ is able to recognize that a packet is an IP packet by
just checking the ethertype without checking the IP version field, the
PMD has to check by software that IHL is 5 and IP version is 4.
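Just to make the consequence concrete, the kind of software fallback I have
in mind would be something like this (sketch only, not a proposal for the
patch):

#include <rte_ip.h>

/* Sketch: verify version == 4 and IHL == 5 (20-byte header, no options)
 * before reporting RTE_PTYPE_L3_IPV4 under the strict definition above. */
static inline int
is_plain_ipv4_hdr(const struct ipv4_hdr *ip)
{
	return (ip->version_ihl >> 4) == 4 &&
	       (ip->version_ihl & 0x0f) == 5;
}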
Regards,
Olivier
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v4 12/18] app/test: Remove useless code
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 12/18] app/test: Remove useless code Helin Zhang
@ 2015-02-27 16:01 ` Gajdzica, MaciejX T
0 siblings, 0 replies; 257+ messages in thread
From: Gajdzica, MaciejX T @ 2015-02-27 16:01 UTC (permalink / raw)
To: Zhang, Helin, dev
> Several useless lines of code were added accidentally, and they block
> packet type unification. They should be removed entirely.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> ---
> app/test/packet_burst_generator.c | 10 ----------
> 1 file changed, 10 deletions(-)
>
> v4 changes:
> * Removed several useless code lines which block packet type unification.
>
> diff --git a/app/test/packet_burst_generator.c
> b/app/test/packet_burst_generator.c
> index b46eed7..b9f8f1a 100644
> --- a/app/test/packet_burst_generator.c
> +++ b/app/test/packet_burst_generator.c
> @@ -272,19 +272,9 @@ nomore_mbuf:
> if (ipv4) {
> pkt->vlan_tci = ETHER_TYPE_IPv4;
> pkt->l3_len = sizeof(struct ipv4_hdr);
> -
> - if (vlan_enabled)
> - pkt->ol_flags = PKT_RX_IPV4_HDR |
> PKT_RX_VLAN_PKT;
> - else
> - pkt->ol_flags = PKT_RX_IPV4_HDR;
> } else {
> pkt->vlan_tci = ETHER_TYPE_IPv6;
> pkt->l3_len = sizeof(struct ipv6_hdr);
> -
> - if (vlan_enabled)
> - pkt->ol_flags = PKT_RX_IPV6_HDR |
> PKT_RX_VLAN_PKT;
> - else
> - pkt->ol_flags = PKT_RX_IPV6_HDR;
> }
>
> pkts_burst[nb_pkt] = pkt;
> --
> 1.9.3
Acked-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of packet_type in rte_mbuf
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of packet_type in rte_mbuf Helin Zhang
@ 2015-03-02 11:47 ` Chilikin, Andrey
2015-03-04 8:34 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Chilikin, Andrey @ 2015-03-02 11:47 UTC (permalink / raw)
To: Zhang, Helin, dev
Hi Helin,
I see that you have removed "uint16_t reserved" member from rte_mbuf:
> + uint16_t data_len; /**< Amount of data in segment buffer. */
> uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
> - uint16_t reserved;
> union {
> uint32_t rss; /**< RSS hash result if RSS enabled */
This reserved field was kept next to vlan_tci as a placeholder for the second VLAN label for QinQ support, so that, if needed, vlan_tci plus the reserved field could be cast to a 32-bit QinQ value or to one 32-bit VNTAG label. Without keeping the two labels adjacent to each other, casting to 32 bits will not be possible, which will affect QinQ performance.
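A minimal sketch of the access pattern described above (assuming the 16 bits
immediately after vlan_tci hold the second tag; illustrative only, not code
from this series):

#include <stdint.h>
#include <string.h>
#include <rte_mbuf.h>

/*
 * Illustrative only: relies on the assumption that the second VLAN tag sits
 * in the 16-bit field placed right after vlan_tci, so that both tags can be
 * fetched with a single 32-bit access.
 */
static inline uint32_t
read_qinq_tags(const struct rte_mbuf *m)
{
        uint32_t tags;

        memcpy(&tags, &m->vlan_tci, sizeof(tags));  /* one 32-bit load */
        return tags;
}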
Regards,
Andrey
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Helin Zhang
> Sent: Friday, February 27, 2015 1:11 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of packet_type in
> rte_mbuf
>
> In order to unify the packet type, the field of 'packet_type' in 'struct
> rte_mbuf' needs to be extended from 16 to 32 bits.
> Accordingly, some fields in 'struct rte_mbuf' are re-organized to support this
> change for Vector PMD. As 'struct rte_kni_mbuf' for KNI should be right
> mapped to 'struct rte_mbuf', it should be modified accordingly. In addition,
> Vector PMD of ixgbe is disabled by default, as 'struct rte_mbuf' changed.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> ---
> config/common_linuxapp | 2 +-
> .../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 ++--
> lib/librte_mbuf/rte_mbuf.h | 23 +++++++++++++++-------
> 3 files changed, 19 insertions(+), 10 deletions(-)
>
> v2 changes:
> * Enlarged the packet_type field from 16 bits to 32 bits.
> * Redefined the packet type sub-fields.
> * Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
>
> v3 changes:
> * Put the mbuf layout changes into a single patch.
> * Disabled vector ixgbe PMD by default, as mbuf layout changed.
>
> diff --git a/config/common_linuxapp b/config/common_linuxapp index
> 97f1c9e..97d7bae 100644
> --- a/config/common_linuxapp
> +++ b/config/common_linuxapp
> @@ -166,7 +166,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
> CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
> CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
> CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
> -CONFIG_RTE_IXGBE_INC_VECTOR=y
> +CONFIG_RTE_IXGBE_INC_VECTOR=n
> CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
>
> #
> diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> index 1e55c2d..bd1cc09 100644
> --- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> +++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> @@ -117,9 +117,9 @@ struct rte_kni_mbuf {
> uint16_t data_off; /**< Start address of data in segment buffer. */
> char pad1[4];
> uint64_t ol_flags; /**< Offload features. */
> - char pad2[2];
> - uint16_t data_len; /**< Amount of data in segment buffer. */
> + char pad2[4];
> uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len.
> */
> + uint16_t data_len; /**< Amount of data in segment buffer. */
>
> /* fields on second cache line */
> char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h index
> 17ba791..f5b7a8b 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -258,17 +258,26 @@ struct rte_mbuf {
> /* remaining bytes are set on RX when pulling packet from descriptor
> */
> MARKER rx_descriptor_fields1;
>
> - /**
> - * The packet type, which is used to indicate ordinary packet and also
> - * tunneled packet format, i.e. each number is represented a type of
> - * packet.
> + /*
> + * The packet type, which is the combination of outer/inner L2, L3, L4
> + * and tunnel types.
> */
> - uint16_t packet_type;
> + union {
> + uint32_t packet_type; /**< L2/L3/L4 and tunnel information.
> */
> + struct {
> + uint32_t l2_type:4; /**< (Outer) L2 type. */
> + uint32_t l3_type:4; /**< (Outer) L3 type. */
> + uint32_t l4_type:4; /**< (Outer) L4 type. */
> + uint32_t tun_type:4; /**< Tunnel type. */
> + uint32_t inner_l2_type:4; /**< Inner L2 type. */
> + uint32_t inner_l3_type:4; /**< Inner L3 type. */
> + uint32_t inner_l4_type:4; /**< Inner L4 type. */
> + };
> + };
>
> - uint16_t data_len; /**< Amount of data in segment buffer. */
> uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
> + uint16_t data_len; /**< Amount of data in segment buffer. */
> uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
> - uint16_t reserved;
> union {
> uint32_t rss; /**< RSS hash result if RSS enabled */
> struct {
> --
> 1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of packet_type in rte_mbuf
2015-03-02 11:47 ` Chilikin, Andrey
@ 2015-03-04 8:34 ` Zhang, Helin
2015-03-04 10:58 ` Chilikin, Andrey
0 siblings, 1 reply; 257+ messages in thread
From: Zhang, Helin @ 2015-03-04 8:34 UTC (permalink / raw)
To: Chilikin, Andrey, dev
> -----Original Message-----
> From: Chilikin, Andrey
> Sent: Monday, March 2, 2015 7:48 PM
> To: Zhang, Helin; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of packet_type in
> rte_mbuf
>
> Hi Helin,
>
> I see that you have removed "uint16_t reserved" member from rte_mbuf:
>
> > + uint16_t data_len; /**< Amount of data in segment buffer. */
> > uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order)
> */
> > - uint16_t reserved;
> > union {
> > uint32_t rss; /**< RSS hash result if RSS enabled */
>
> This reserved field was kept next to vlan_tci as a placeholder for the second
> VLAN label for QinQ support so if need be vlan_tci + reserved could be casted to
> 32 bit QinQ value or one 32bit VNTAG label. Without keeping two label adjusted
> to each other casting to 32 bit will not be possible and will affect QinQ
> performance.
Yes, but the packet type is quite important, and it needs to be extended from 16 bits to 32 bits.
For FVL, the vlan tags are in different fields. We can think of putting them together in the mbuf,
possibly moving the current vlan tag down and adding one more 16-bit field. Let's see what is best then.
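For illustration only (the field names below are hypothetical and not part of
this series), keeping the two tags adjacent could look like:

#include <stdint.h>

/*
 * Hypothetical layout sketch, not part of this patch set: the two VLAN tags
 * kept next to each other so QinQ code can still read them as one 32-bit
 * value.
 */
struct vlan_area_sketch {
        uint16_t vlan_tci;        /* inner VLAN Tag Control Identifier */
        uint16_t vlan_tci_outer;  /* hypothetical second/outer VLAN TCI */
};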
Thanks for the notes!
Regards,
Helin
>
> Regards,
> Andrey
>
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Helin Zhang
> > Sent: Friday, February 27, 2015 1:11 PM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of packet_type
> > in rte_mbuf
> >
> > In order to unify the packet type, the field of 'packet_type' in
> > 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
> > Accordingly, some fields in 'struct rte_mbuf' are re-organized to
> > support this change for Vector PMD. As 'struct rte_kni_mbuf' for KNI
> > should be right mapped to 'struct rte_mbuf', it should be modified
> > accordingly. In addition, Vector PMD of ixgbe is disabled by default, as 'struct
> rte_mbuf' changed.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > ---
> > config/common_linuxapp | 2 +-
> > .../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 ++--
> > lib/librte_mbuf/rte_mbuf.h | 23
> +++++++++++++++-------
> > 3 files changed, 19 insertions(+), 10 deletions(-)
> >
> > v2 changes:
> > * Enlarged the packet_type field from 16 bits to 32 bits.
> > * Redefined the packet type sub-fields.
> > * Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
> >
> > v3 changes:
> > * Put the mbuf layout changes into a single patch.
> > * Disabled vector ixgbe PMD by default, as mbuf layout changed.
> >
> > diff --git a/config/common_linuxapp b/config/common_linuxapp index
> > 97f1c9e..97d7bae 100644
> > --- a/config/common_linuxapp
> > +++ b/config/common_linuxapp
> > @@ -166,7 +166,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
> > CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
> > CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
> > CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
> > -CONFIG_RTE_IXGBE_INC_VECTOR=y
> > +CONFIG_RTE_IXGBE_INC_VECTOR=n
> > CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
> >
> > #
> > diff --git
> > a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> > b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> > index 1e55c2d..bd1cc09 100644
> > --- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> > +++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> > @@ -117,9 +117,9 @@ struct rte_kni_mbuf {
> > uint16_t data_off; /**< Start address of data in segment buffer. */
> > char pad1[4];
> > uint64_t ol_flags; /**< Offload features. */
> > - char pad2[2];
> > - uint16_t data_len; /**< Amount of data in segment buffer. */
> > + char pad2[4];
> > uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len.
> > */
> > + uint16_t data_len; /**< Amount of data in segment buffer. */
> >
> > /* fields on second cache line */
> > char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index 17ba791..f5b7a8b 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -258,17 +258,26 @@ struct rte_mbuf {
> > /* remaining bytes are set on RX when pulling packet from descriptor
> > */
> > MARKER rx_descriptor_fields1;
> >
> > - /**
> > - * The packet type, which is used to indicate ordinary packet and also
> > - * tunneled packet format, i.e. each number is represented a type of
> > - * packet.
> > + /*
> > + * The packet type, which is the combination of outer/inner L2, L3, L4
> > + * and tunnel types.
> > */
> > - uint16_t packet_type;
> > + union {
> > + uint32_t packet_type; /**< L2/L3/L4 and tunnel information.
> > */
> > + struct {
> > + uint32_t l2_type:4; /**< (Outer) L2 type. */
> > + uint32_t l3_type:4; /**< (Outer) L3 type. */
> > + uint32_t l4_type:4; /**< (Outer) L4 type. */
> > + uint32_t tun_type:4; /**< Tunnel type. */
> > + uint32_t inner_l2_type:4; /**< Inner L2 type. */
> > + uint32_t inner_l3_type:4; /**< Inner L3 type. */
> > + uint32_t inner_l4_type:4; /**< Inner L4 type. */
> > + };
> > + };
> >
> > - uint16_t data_len; /**< Amount of data in segment buffer. */
> > uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
> > + uint16_t data_len; /**< Amount of data in segment buffer. */
> > uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order)
> */
> > - uint16_t reserved;
> > union {
> > uint32_t rss; /**< RSS hash result if RSS enabled */
> > struct {
> > --
> > 1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of packet_type in rte_mbuf
2015-03-04 8:34 ` Zhang, Helin
@ 2015-03-04 10:58 ` Chilikin, Andrey
2015-03-05 0:55 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Chilikin, Andrey @ 2015-03-04 10:58 UTC (permalink / raw)
To: Zhang, Helin, dev
> -----Original Message-----
> From: Zhang, Helin
> Sent: Wednesday, March 4, 2015 8:34 AM
> To: Chilikin, Andrey; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of packet_type
> in rte_mbuf
>
>
>
> > -----Original Message-----
> > From: Chilikin, Andrey
> > Sent: Monday, March 2, 2015 7:48 PM
> > To: Zhang, Helin; dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of
> > packet_type in rte_mbuf
> >
> > Hi Helin,
> >
> > I see that you have removed "uint16_t reserved" member from rte_mbuf:
> >
> > > + uint16_t data_len; /**< Amount of data in segment buffer. */
> > > uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order)
> > */
> > > - uint16_t reserved;
> > > union {
> > > uint32_t rss; /**< RSS hash result if RSS enabled */
> >
> > This reserved field was kept next to vlan_tci as a placeholder for
> > the second VLAN label for QinQ support so if need be vlan_tci +
> > reserved could be casted to
> > 32 bit QinQ value or one 32bit VNTAG label. Without keeping two label
> > adjusted to each other casting to 32 bit will not be possible and will
> > affect QinQ performance.
> Yes, but packet type is quite important which needs to be extended from 16
> bits to 32 bits.
> For FVL, the vlan tags are in different fields.
I do not see how FVL internal descriptor can affect DPDK mbuf structure.
> We can think of putting them
> together in mbuf, Possibly move current vlan tag down, and add one more 16
> bits. Let's see what is the best then.
But if we know that we will need this change for QinQ anyway, should we move
vlan_tci + the reserved 16 bits now, in this patch, instead of testing performance
twice - once for this patch and once for a future patch that moves vlan_tci + reserved?
Regards,
Andrey
> Thanks for the notes!
>
> Regards,
> Helin
>
> >
> > Regards,
> > Andrey
> >
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Helin Zhang
> > > Sent: Friday, February 27, 2015 1:11 PM
> > > To: dev@dpdk.org
> > > Subject: [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of
> > > packet_type in rte_mbuf
> > >
> > > In order to unify the packet type, the field of 'packet_type' in
> > > 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
> > > Accordingly, some fields in 'struct rte_mbuf' are re-organized to
> > > support this change for Vector PMD. As 'struct rte_kni_mbuf' for KNI
> > > should be right mapped to 'struct rte_mbuf', it should be modified
> > > accordingly. In addition, Vector PMD of ixgbe is disabled by
> > > default, as 'struct
> > rte_mbuf' changed.
> > >
> > > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > > ---
> > > config/common_linuxapp | 2 +-
> > > .../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 ++--
> > > lib/librte_mbuf/rte_mbuf.h | 23
> > +++++++++++++++-------
> > > 3 files changed, 19 insertions(+), 10 deletions(-)
> > >
> > > v2 changes:
> > > * Enlarged the packet_type field from 16 bits to 32 bits.
> > > * Redefined the packet type sub-fields.
> > > * Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf
> changes.
> > >
> > > v3 changes:
> > > * Put the mbuf layout changes into a single patch.
> > > * Disabled vector ixgbe PMD by default, as mbuf layout changed.
> > >
> > > diff --git a/config/common_linuxapp b/config/common_linuxapp index
> > > 97f1c9e..97d7bae 100644
> > > --- a/config/common_linuxapp
> > > +++ b/config/common_linuxapp
> > > @@ -166,7 +166,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
> > > CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
> > > CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
> > > CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
> > > -CONFIG_RTE_IXGBE_INC_VECTOR=y
> > > +CONFIG_RTE_IXGBE_INC_VECTOR=n
> > > CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
> > >
> > > #
> > > diff --git
> > > a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> > > b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> > > index 1e55c2d..bd1cc09 100644
> > > --- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> > > +++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> > > @@ -117,9 +117,9 @@ struct rte_kni_mbuf {
> > > uint16_t data_off; /**< Start address of data in segment buffer. */
> > > char pad1[4];
> > > uint64_t ol_flags; /**< Offload features. */
> > > - char pad2[2];
> > > - uint16_t data_len; /**< Amount of data in segment buffer. */
> > > + char pad2[4];
> > > uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len.
> > > */
> > > + uint16_t data_len; /**< Amount of data in segment buffer. */
> > >
> > > /* fields on second cache line */
> > > char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
> > > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > > index 17ba791..f5b7a8b 100644
> > > --- a/lib/librte_mbuf/rte_mbuf.h
> > > +++ b/lib/librte_mbuf/rte_mbuf.h
> > > @@ -258,17 +258,26 @@ struct rte_mbuf {
> > > /* remaining bytes are set on RX when pulling packet from
> > > descriptor */
> > > MARKER rx_descriptor_fields1;
> > >
> > > - /**
> > > - * The packet type, which is used to indicate ordinary packet and also
> > > - * tunneled packet format, i.e. each number is represented a type of
> > > - * packet.
> > > + /*
> > > + * The packet type, which is the combination of outer/inner L2, L3, L4
> > > + * and tunnel types.
> > > */
> > > - uint16_t packet_type;
> > > + union {
> > > + uint32_t packet_type; /**< L2/L3/L4 and tunnel information.
> > > */
> > > + struct {
> > > + uint32_t l2_type:4; /**< (Outer) L2 type. */
> > > + uint32_t l3_type:4; /**< (Outer) L3 type. */
> > > + uint32_t l4_type:4; /**< (Outer) L4 type. */
> > > + uint32_t tun_type:4; /**< Tunnel type. */
> > > + uint32_t inner_l2_type:4; /**< Inner L2 type. */
> > > + uint32_t inner_l3_type:4; /**< Inner L3 type. */
> > > + uint32_t inner_l4_type:4; /**< Inner L4 type. */
> > > + };
> > > + };
> > >
> > > - uint16_t data_len; /**< Amount of data in segment buffer. */
> > > uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
> > > + uint16_t data_len; /**< Amount of data in segment buffer. */
> > > uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order)
> > */
> > > - uint16_t reserved;
> > > union {
> > > uint32_t rss; /**< RSS hash result if RSS enabled */
> > > struct {
> > > --
> > > 1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of packet_type in rte_mbuf
2015-03-04 10:58 ` Chilikin, Andrey
@ 2015-03-05 0:55 ` Zhang, Helin
0 siblings, 0 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-03-05 0:55 UTC (permalink / raw)
To: Chilikin, Andrey, dev
> -----Original Message-----
> From: Chilikin, Andrey
> Sent: Wednesday, March 4, 2015 6:59 PM
> To: Zhang, Helin; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of packet_type in
> rte_mbuf
>
> > -----Original Message-----
> > From: Zhang, Helin
> > Sent: Wednesday, March 4, 2015 8:34 AM
> > To: Chilikin, Andrey; dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of
> > packet_type in rte_mbuf
> >
> >
> >
> > > -----Original Message-----
> > > From: Chilikin, Andrey
> > > Sent: Monday, March 2, 2015 7:48 PM
> > > To: Zhang, Helin; dev@dpdk.org
> > > Subject: RE: [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of
> > > packet_type in rte_mbuf
> > >
> > > Hi Helin,
> > >
> > > I see that you have removed "uint16_t reserved" member from rte_mbuf:
> > >
> > > > + uint16_t data_len; /**< Amount of data in segment buffer.
> */
> > > > uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU
> order)
> > > */
> > > > - uint16_t reserved;
> > > > union {
> > > > uint32_t rss; /**< RSS hash result if RSS enabled */
> > >
> > > This reserved field was kept next to vlan_tci as a placeholder for
> > > the second VLAN label for QinQ support so if need be vlan_tci +
> > > reserved could be casted to
> > > 32 bit QinQ value or one 32bit VNTAG label. Without keeping two
> > > label adjusted to each other casting to 32 bit will not be possible
> > > and will affect QinQ performance.
> > Yes, but packet type is quite important which needs to be extended
> > from 16 bits to 32 bits.
> > For FVL, the vlan tags are in different fields.
> I do not see how FVL internal descriptor can affect DPDK mbuf structure.
The FVL RX descriptor plays a key role in the mbuf structure definition, because
of the Vector PMD.
>
> > We can think of putting them
> > together in mbuf, Possibly move current vlan tag down, and add one
> > more 16 bits. Let's see what is the best then.
> But if we know that we would need this change for QinQ anyway should we
> move vlan_tci +reserved 16bits now in this patch instead of testing
> performance twice - for this and for future patch when we move
> vlan_tci+reserved?
Good idea, and I will discuss it with team members to see if there is any objection.
Generally we modify things as needed, rather than modifying things based on prediction.
This has been pointed out by Thomas and other reviewers several times.
Regards,
Helin
>
> Regards,
> Andrey
>
> > Thanks for the notes!
> >
> > Regards,
> > Helin
> >
> > >
> > > Regards,
> > > Andrey
> > >
> > >
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Helin Zhang
> > > > Sent: Friday, February 27, 2015 1:11 PM
> > > > To: dev@dpdk.org
> > > > Subject: [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of
> > > > packet_type in rte_mbuf
> > > >
> > > > In order to unify the packet type, the field of 'packet_type' in
> > > > 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
> > > > Accordingly, some fields in 'struct rte_mbuf' are re-organized to
> > > > support this change for Vector PMD. As 'struct rte_kni_mbuf' for
> > > > KNI should be right mapped to 'struct rte_mbuf', it should be
> > > > modified accordingly. In addition, Vector PMD of ixgbe is disabled
> > > > by default, as 'struct
> > > rte_mbuf' changed.
> > > >
> > > > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > > > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > > > ---
> > > > config/common_linuxapp | 2 +-
> > > > .../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 ++--
> > > > lib/librte_mbuf/rte_mbuf.h | 23
> > > +++++++++++++++-------
> > > > 3 files changed, 19 insertions(+), 10 deletions(-)
> > > >
> > > > v2 changes:
> > > > * Enlarged the packet_type field from 16 bits to 32 bits.
> > > > * Redefined the packet type sub-fields.
> > > > * Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf
> > changes.
> > > >
> > > > v3 changes:
> > > > * Put the mbuf layout changes into a single patch.
> > > > * Disabled vector ixgbe PMD by default, as mbuf layout changed.
> > > >
> > > > diff --git a/config/common_linuxapp b/config/common_linuxapp index
> > > > 97f1c9e..97d7bae 100644
> > > > --- a/config/common_linuxapp
> > > > +++ b/config/common_linuxapp
> > > > @@ -166,7 +166,7 @@
> CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
> > > > CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
> > > > CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
> > > > CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
> > > > -CONFIG_RTE_IXGBE_INC_VECTOR=y
> > > > +CONFIG_RTE_IXGBE_INC_VECTOR=n
> > > > CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
> > > >
> > > > #
> > > > diff --git
> > > > a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> > > > b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> > > > index 1e55c2d..bd1cc09 100644
> > > > ---
> > > > a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> > > > +++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.
> > > > +++ h
> > > > @@ -117,9 +117,9 @@ struct rte_kni_mbuf {
> > > > uint16_t data_off; /**< Start address of data in segment
> buffer. */
> > > > char pad1[4];
> > > > uint64_t ol_flags; /**< Offload features. */
> > > > - char pad2[2];
> > > > - uint16_t data_len; /**< Amount of data in segment buffer. */
> > > > + char pad2[4];
> > > > uint32_t pkt_len; /**< Total pkt len: sum of all segment
> data_len.
> > > > */
> > > > + uint16_t data_len; /**< Amount of data in segment buffer. */
> > > >
> > > > /* fields on second cache line */
> > > > char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
> > > > diff --git a/lib/librte_mbuf/rte_mbuf.h
> > > > b/lib/librte_mbuf/rte_mbuf.h index 17ba791..f5b7a8b 100644
> > > > --- a/lib/librte_mbuf/rte_mbuf.h
> > > > +++ b/lib/librte_mbuf/rte_mbuf.h
> > > > @@ -258,17 +258,26 @@ struct rte_mbuf {
> > > > /* remaining bytes are set on RX when pulling packet from
> > > > descriptor */
> > > > MARKER rx_descriptor_fields1;
> > > >
> > > > - /**
> > > > - * The packet type, which is used to indicate ordinary packet and also
> > > > - * tunneled packet format, i.e. each number is represented a type of
> > > > - * packet.
> > > > + /*
> > > > + * The packet type, which is the combination of outer/inner L2, L3, L4
> > > > + * and tunnel types.
> > > > */
> > > > - uint16_t packet_type;
> > > > + union {
> > > > + uint32_t packet_type; /**< L2/L3/L4 and tunnel information.
> > > > */
> > > > + struct {
> > > > + uint32_t l2_type:4; /**< (Outer) L2 type. */
> > > > + uint32_t l3_type:4; /**< (Outer) L3 type. */
> > > > + uint32_t l4_type:4; /**< (Outer) L4 type. */
> > > > + uint32_t tun_type:4; /**< Tunnel type. */
> > > > + uint32_t inner_l2_type:4; /**< Inner L2 type. */
> > > > + uint32_t inner_l3_type:4; /**< Inner L3 type. */
> > > > + uint32_t inner_l4_type:4; /**< Inner L4 type. */
> > > > + };
> > > > + };
> > > >
> > > > - uint16_t data_len; /**< Amount of data in segment buffer.
> */
> > > > uint32_t pkt_len; /**< Total pkt len: sum of all segments.
> */
> > > > + uint16_t data_len; /**< Amount of data in segment buffer.
> */
> > > > uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU
> order)
> > > */
> > > > - uint16_t reserved;
> > > > union {
> > > > uint32_t rss; /**< RSS hash result if RSS enabled */
> > > > struct {
> > > > --
> > > > 1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v5 00/18] unified packet type
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
` (17 preceding siblings ...)
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 18/18] mbuf: remove old packet type bit masks Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
` (18 more replies)
18 siblings, 19 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
Currently only 6 bits stored in ol_flags are used to indicate the packet
types. This is not enough, as some NIC hardware can recognize quite a lot of
packet types, e.g. i40e hardware can recognize more than 150 packet types.
Hiding those packet types hides hardware offload capabilities which could be
quite useful for improving performance and for end users. So a unified packet
type is needed to support all possible PMDs. The 16-bit packet_type field in
the mbuf structure can be enlarged to 32 bits and used for this purpose. In
addition, all packet types stored in the ol_flags field should be removed
entirely, saving 6 bits of ol_flags as a benefit.
Initially, the 32 bits of packet_type can be divided into several sub-fields
to indicate different packet type information of a packet. The initial design
is to divide those bits into fields for L2 types, L3 types, L4 types, tunnel
types, inner L2 types, inner L3 types and inner L4 types. All PMDs should
translate the offloaded packet types into these 7 fields of information for
user applications.
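As a rough sketch of how an application might consume these fields once the
series is applied (illustrative only; the macros used are the ones introduced
later in this series):

#include <rte_mbuf.h>

/*
 * Sketch only: classify a received mbuf using the unified 32-bit packet_type
 * and the RTE_PTYPE_ / RTE_ETH_IS_ helpers added by this series.
 */
static uint32_t
classify_pkt(const struct rte_mbuf *m)
{
        uint32_t ptype = m->packet_type;

        if (RTE_ETH_IS_TUNNEL_PKT(ptype))
                /* tunneled: the inner fields describe the encapsulated packet */
                return ptype & RTE_PTYPE_INNER_L4_MASK;
        if (RTE_ETH_IS_IPV4_HDR(ptype) &&
            (ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)
                return RTE_PTYPE_L4_TCP;        /* plain IPv4/TCP */
        return RTE_PTYPE_UNKNOWN;
}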
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
* Used redefined packet types and enlarged packet_type field for all PMDs
and corresponding applications.
* Removed changes in bond and its relevant application, as there is no need
at all according to the recent bond changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Put vector ixgbe changes right after mbuf changes.
* Disabled vector ixgbe PMD by default, as mbuf layout changed, and then
re-enabled it after vector ixgbe PMD updated.
* Put the definitions of unified packet type into a single patch.
* Minor bug fixes and enhancements in l3fwd example.
v4 changes:
* Added detailed descriptions of each packet type.
* Supported unified packet type of fm10k.
* Added printing logs of packet types of each received packet for rxonly
mode in testpmd.
* Removed several useless code lines which block packet type unification from
app/test/packet_burst_generator.c.
v5 changes:
* Added more detailed descriptions for each packet type, together with examples.
* Rolled back the macro definitions of RX packet flags, for ABI compatibility.
Helin Zhang (18):
mbuf: redefine packet_type in rte_mbuf
ixgbe: support unified packet type in vectorized PMD
mbuf: add definitions of unified packet types
e1000: replace bit mask based packet type with unified packet type
ixgbe: replace bit mask based packet type with unified packet type
i40e: replace bit mask based packet type with unified packet type
enic: replace bit mask based packet type with unified packet type
vmxnet3: replace bit mask based packet type with unified packet type
fm10k: replace bit mask based packet type with unified packet type
app/test-pipeline: replace bit mask based packet type with unified
packet type
app/testpmd: replace bit mask based packet type with unified packet
type
app/test: Remove useless code
examples/ip_fragmentation: replace bit mask based packet type with
unified packet type
examples/ip_reassembly: replace bit mask based packet type with
unified packet type
examples/l3fwd-acl: replace bit mask based packet type with unified
packet type
examples/l3fwd-power: replace bit mask based packet type with unified
packet type
examples/l3fwd: replace bit mask based packet type with unified packet
type
mbuf: remove old packet type bit masks
app/test-pipeline/pipeline_hash.c | 7 +-
app/test-pmd/csumonly.c | 10 +-
app/test-pmd/rxonly.c | 178 ++++-
app/test/packet_burst_generator.c | 10 -
examples/ip_fragmentation/main.c | 7 +-
examples/ip_reassembly/main.c | 7 +-
examples/l3fwd-acl/main.c | 19 +-
examples/l3fwd-power/main.c | 5 +-
examples/l3fwd/main.c | 71 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 +-
lib/librte_mbuf/rte_mbuf.c | 6 -
lib/librte_mbuf/rte_mbuf.h | 514 +++++++++++++-
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++-
lib/librte_pmd_enic/enic_main.c | 14 +-
lib/librte_pmd_fm10k/fm10k_rxtx.c | 30 +-
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++-------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 139 +++-
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +-
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 +-
19 files changed, 1498 insertions(+), 460 deletions(-)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v5 01/18] mbuf: redefine packet_type in rte_mbuf
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 10:09 ` Neil Horman
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 02/18] ixgbe: support unified packet type in vectorized PMD Helin Zhang
` (17 subsequent siblings)
18 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
In order to unify the packet type, the field of 'packet_type' in
'struct rte_mbuf' needs to be extended from 16 to 32 bits.
Accordingly, some fields in 'struct rte_mbuf' are re-organized to
support this change for the Vector PMD. As 'struct rte_kni_mbuf' for
KNI must map exactly onto 'struct rte_mbuf', it is modified
accordingly. In addition, the Vector PMD of ixgbe is disabled
by default, as the layout of 'struct rte_mbuf' changed.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
---
config/common_linuxapp | 2 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 ++--
lib/librte_mbuf/rte_mbuf.h | 23 +++++++++++++++-------
3 files changed, 19 insertions(+), 10 deletions(-)
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Disabled vector ixgbe PMD by default, as mbuf layout changed.
v5 changes:
* Re-worded the commit logs.
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 0078dc9..6b067c7 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -167,7 +167,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=y
+CONFIG_RTE_IXGBE_INC_VECTOR=n
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
#
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index 1e55c2d..bd1cc09 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -117,9 +117,9 @@ struct rte_kni_mbuf {
uint16_t data_off; /**< Start address of data in segment buffer. */
char pad1[4];
uint64_t ol_flags; /**< Offload features. */
- char pad2[2];
- uint16_t data_len; /**< Amount of data in segment buffer. */
+ char pad2[4];
uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
/* fields on second cache line */
char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ab6de67..c2b1463 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -269,17 +269,26 @@ struct rte_mbuf {
/* remaining bytes are set on RX when pulling packet from descriptor */
MARKER rx_descriptor_fields1;
- /**
- * The packet type, which is used to indicate ordinary packet and also
- * tunneled packet format, i.e. each number is represented a type of
- * packet.
+ /*
+ * The packet type, which is the combination of outer/inner L2, L3, L4
+ * and tunnel types.
*/
- uint16_t packet_type;
+ union {
+ uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
+ struct {
+ uint32_t l2_type:4; /**< (Outer) L2 type. */
+ uint32_t l3_type:4; /**< (Outer) L3 type. */
+ uint32_t l4_type:4; /**< (Outer) L4 type. */
+ uint32_t tun_type:4; /**< Tunnel type. */
+ uint32_t inner_l2_type:4; /**< Inner L2 type. */
+ uint32_t inner_l3_type:4; /**< Inner L3 type. */
+ uint32_t inner_l4_type:4; /**< Inner L4 type. */
+ };
+ };
- uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
- uint16_t reserved;
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v5 02/18] ixgbe: support unified packet type in vectorized PMD
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 03/18] mbuf: add definitions of unified packet types Helin Zhang
` (16 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
To unify the packet type, the packet type bit masks for ol_flags are
replaced. In addition, more packet types (UDP, TCP and SCTP) are
supported in the vectorized ixgbe PMD.
Note that a performance drop of around 2% (64B packets) was observed when doing
4-port (1 port per 82599 card) IO forwarding on the same SNB core.
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
config/common_linuxapp | 2 +-
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 49 +++++++++++++++++++----------------
2 files changed, 27 insertions(+), 24 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v3 changes:
* Put vector ixgbe changes right after mbuf changes.
* Enabled vector ixgbe PMD by default together with changes for updated
vector PMD.
v5 changes:
* Re-worded the commit logs.
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 6b067c7..0078dc9 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -167,7 +167,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=n
+CONFIG_RTE_IXGBE_INC_VECTOR=y
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
#
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
index abd10f6..a84d2f6 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
@@ -134,44 +134,35 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE
-#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
- PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
- PKT_RX_IPV6_HDR_EXT))
-#define OLFLAGS_MASK_V (((uint64_t)OLFLAGS_MASK << 48) | \
- ((uint64_t)OLFLAGS_MASK << 32) | \
- ((uint64_t)OLFLAGS_MASK << 16) | \
- ((uint64_t)OLFLAGS_MASK))
-#define PTYPE_SHIFT (1)
+#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
+ ((uint64_t)PKT_RX_VLAN_PKT))
#define VTAG_SHIFT (3)
static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
- __m128i ptype0, ptype1, vtag0, vtag1;
+ __m128i vtag0, vtag1;
union {
uint16_t e[4];
uint64_t dword;
} vol;
- ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
- ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);
- ptype1 = _mm_unpacklo_epi32(ptype0, ptype1);
vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
-
- ptype1 = _mm_slli_epi16(ptype1, PTYPE_SHIFT);
vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);
- ptype1 = _mm_or_si128(ptype1, vtag1);
- vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
+ vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;
rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
rx_pkts[2]->ol_flags = vol.e[2];
rx_pkts[3]->ol_flags = vol.e[3];
}
+
#else
#define desc_to_olflags_v(desc, rx_pkts) do {} while (0)
#endif
@@ -197,13 +188,15 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
uint64_t var;
__m128i shuf_msk;
__m128i crc_adjust = _mm_set_epi16(
- 0, 0, 0, 0, /* ignore non-length fields */
+ 0, 0, 0, /* ignore non-length fields */
+ -rxq->crc_len, /* sub crc on data_len */
0, /* ignore high-16bits of pkt_len */
-rxq->crc_len, /* sub crc on pkt_len */
- -rxq->crc_len, /* sub crc on data_len */
- 0 /* ignore pkt_type field */
+ 0, 0 /* ignore pkt_type field */
);
__m128i dd_check, eop_check;
+ __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
+ 0xFFFFFFFF, 0xFFFF07F0);
if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -234,12 +227,13 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* mask to shuffle from desc. to mbuf */
shuf_msk = _mm_set_epi8(
7, 6, 5, 4, /* octet 4~7, 32bits rss */
- 0xFF, 0xFF, /* skip high 16 bits vlan_macip, zero out */
15, 14, /* octet 14~15, low 16 bits vlan_macip */
+ 13, 12, /* octet 12~13, 16 bits data_len */
0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
13, 12, /* octet 12~13, low 16 bits pkt_len */
- 13, 12, /* octet 12~13, 16 bits data_len */
- 0xFF, 0xFF /* skip pkt_type field */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_type */
+ 1, /* octet 1, 8 bits pkt_type field */
+ 0 /* octet 0, 4 bits offset 4 pkt_type field */
);
/* Cache is empty -> need to scan the buffer rings, but first move
@@ -248,6 +242,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/*
* A. load 4 packet in one loop
+ * [A*. mask out 4 unused dirty field in desc]
* B. copy 4 mbuf point from swring to rx_pkts
* C. calc the number of DD bits among the 4 packets
* [C*. extract the end-of-packet bit, if requested]
@@ -289,6 +284,14 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.2 copy 2 mbuf point into rx_pkts */
_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);
+ /* A* mask out 0~3 bits RSS type */
+ descs[3] = _mm_and_si128(descs[3], desc_mask);
+ descs[2] = _mm_and_si128(descs[2], desc_mask);
+
+ /* A* mask out 0~3 bits RSS type */
+ descs[1] = _mm_and_si128(descs[1], desc_mask);
+ descs[0] = _mm_and_si128(descs[0], desc_mask);
+
/* avoid compiler reorder optimization */
rte_compiler_barrier();
@@ -301,7 +304,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
- /* set ol_flags with packet type and vlan tag */
+ /* set ol_flags with vlan packet type */
desc_to_olflags_v(descs, &rx_pkts[pos]);
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v5 03/18] mbuf: add definitions of unified packet types
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 02/18] ixgbe: support unified packet type in vectorized PMD Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 04/18] e1000: replace bit mask based packet type with unified packet type Helin Zhang
` (15 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
There are only 6 bit flags in ol_flags for indicating packet
types, which is not enough to describe all the possible packet
types hardware can recognize. For example, i40e hardware can
recognize more than 150 packet types. The unified packet type is
composed of L2 type, L3 type, L4 type, tunnel type, inner L2 type,
inner L3 type and inner L4 type fields, and can be stored in the
32-bit 'packet_type' field of 'struct rte_mbuf'.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 485 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 485 insertions(+)
v3 changes:
* Put the definitions of unified packet type into a single patch.
v4 changes:
* Added detailed descriptions of each packet type.
v5 changes:
* Re-worded the commit logs.
* Added more detailed description for all packet types, together with examples.
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index c2b1463..6a26172 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -195,6 +195,491 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
+/*
+ * 32 bits are divided into several fields to mark packet types. Note that
+ * each field is indexical.
+ * - Bit 3:0 is for L2 types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
+ * - Bit 15:12 is for tunnel types.
+ * - Bit 19:16 is for inner L2 types.
+ * - Bit 23:20 is for inner L3 types.
+ * - Bit 27:24 is for inner L4 types.
+ * - Bit 31:28 is reserved.
+ *
+ * To be compatible with Vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that L3 types values are selected for checking IPV4/IPV6 header from
+ * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
+ *
+ * Note that the packet types of the same packet recognized by different
+ * hardware may be different, as different hardware may have different
+ * capability of packet type recognition.
+ *
+ * examples:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=0x29
+ * | 'version'=6, 'next header'=0x3A
+ * | 'ICMPv6 header'>
+ * will be recognized on i40e hardware as packet type combination of,
+ * RTE_PTYPE_L2_MAC |
+ * RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ * RTE_PTYPE_TUNNEL_IP |
+ * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_INNER_L4_ICMP.
+ *
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x2F
+ * | 'GRE header'
+ * | 'version'=6, 'next header'=0x11
+ * | 'UDP header'>
+ * will be recognized on i40e hardware as packet type combination of,
+ * RTE_PTYPE_L2_MAC |
+ * RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_TUNNEL_GRENAT |
+ * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_INNER_L4_UDP.
+ */
+#define RTE_PTYPE_UNKNOWN 0x00000000
+/**
+ * MAC (Media Access Control) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=[0x0800|0x86DD|others]>
+ */
+#define RTE_PTYPE_L2_MAC 0x00000001
+/**
+ * MAC (Media Access Control) packet type for time sync.
+ *
+ * Packet format:
+ * <'ether type'=0x88F7>
+ */
+#define RTE_PTYPE_L2_MAC_TIMESYNC 0x00000002
+/**
+ * ARP (Address Resolution Protocol) packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0806>
+ */
+#define RTE_PTYPE_L2_ARP 0x00000003
+/**
+ * LLDP (Link Layer Discovery Protocol) packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x88CC>
+ */
+#define RTE_PTYPE_L2_LLDP 0x00000004
+/**
+ * Mask of layer 2 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L2_MASK 0x0000000f
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * header option.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=5>
+ */
+#define RTE_PTYPE_L3_IPV4 0x00000010
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and contains header
+ * options.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[6-15], 'options'>
+ */
+#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * extension header.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x3B>
+ */
+#define RTE_PTYPE_L3_IPV6 0x00000040
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * header options.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[5-15], <'options'>>
+ */
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and contains extension
+ * headers.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * 'extension headers'>
+ */
+#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * extension headers.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * <'extension headers'>>
+ */
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
+/**
+ * Mask of layer 3 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L3_MASK 0x000000f0
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=6, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=6>
+ */
+#define RTE_PTYPE_L4_TCP 0x00000100
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17>
+ */
+#define RTE_PTYPE_L4_UDP 0x00000200
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * It refers to those packets of any IP types, which can be recognized as
+ * fragmented. A fragmented packet cannot be recognized as any other L4 types
+ * (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP,
+ * RTE_PTYPE_L4_NONFRAG).
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'MF'=1>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=44>
+ */
+#define RTE_PTYPE_L4_FRAG 0x00000300
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=132, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=132>
+ */
+#define RTE_PTYPE_L4_SCTP 0x00000400
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=1, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=1>
+ */
+#define RTE_PTYPE_L4_ICMP 0x00000500
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * It refers to those packets of any IP types, which cannot be recognized as
+ * any of above L4 types (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP,
+ * RTE_PTYPE_L4_FRAG, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP).
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'!=[6|17|44|132|1]>
+ */
+#define RTE_PTYPE_L4_NONFRAG 0x00000600
+/**
+ * Mask of layer 4 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_MASK 0x00000f00
+/**
+ * IP (Internet Protocol) in IP (Internet Protocol) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=[4|41]>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[4|41]>
+ */
+#define RTE_PTYPE_TUNNEL_IP 0x00001000
+/**
+ * GRE (Generic Routing Encapsulation) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=47>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=47>
+ */
+#define RTE_PTYPE_TUNNEL_GRE 0x00002000
+/**
+ * VXLAN (Virtual eXtensible Local Area Network) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17
+ * | 'destination port'=4798>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17
+ * | 'destination port'=4798>
+ */
+#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
+/**
+ * NVGRE (Network Virtualization using Generic Routing Encapsulation) tunneling
+ * packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=47
+ * | 'protocol type'=0x6558>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=47
+ * | 'protocol type'=0x6558'>
+ */
+#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
+/**
+ * GENEVE (Generic Network Virtualization Encapsulation) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17
+ * | 'destination port'=6081>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17
+ * | 'destination port'=6081>
+ */
+#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
+/**
+ * Tunneling packet type of Teredo, VXLAN (Virtual eXtensible Local Area
+ * Network) or GRE (Generic Routing Encapsulation) could be recognized as this
+ * packet type, if they cannot be recognized independently due to limited
+ * hardware capability.
+ */
+#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
+/**
+ * Mask of tunneling packet types.
+ */
+#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
+/**
+ * MAC (Media Access Control) packet type.
+ * It is used for inner packet type only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=[0x800|0x86DD]>
+ */
+#define RTE_PTYPE_INNER_L2_MAC 0x00010000
+/**
+ * MAC (Media Access Control) packet type with VLAN (Virtual Local Area
+ * Network) tag.
+ *
+ * Packet format (inner only):
+ * <'ether type'=[0x800|0x86DD], vlan=[1-4095]>
+ */
+#define RTE_PTYPE_INNER_L2_MAC_VLAN 0x00020000
+/**
+ * Mask of inner layer 2 packet types.
+ */
+#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and does not contain any header option.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=5>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and contains header options.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[6-15], 'options'>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and does not contain any extension header.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x3B>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and may or may not contain header options.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[5-15], <'options'>>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and contains extension headers.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * 'extension headers'>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and may or may not contain extension
+ * headers.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * <'extension headers'>>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
+/**
+ * Mask of inner layer 3 packet types.
+ */
+#define RTE_PTYPE_INNER_INNER_L3_MASK 0x00f00000
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=6, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=6>
+ */
+#define RTE_PTYPE_INNER_L4_TCP 0x01000000
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17>
+ */
+#define RTE_PTYPE_INNER_L4_UDP 0x02000000
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not have a layer 4 packet.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'MF'=1>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=44>
+ */
+#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=132, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=132>
+ */
+#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=1, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=1>
+ */
+#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not have other unknown layer
+ * 4 packet types.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'!=[6|17|44|132|1]>
+ */
+#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
+/**
+ * Mask of inner layer 4 packet types.
+ */
+#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
+ * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
+ * determine if it is an IPv4 packet.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
+ * one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
+ * determine if it is an IPv6 packet.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneling packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+
/**
* Get the name of a RX offload flag
*
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
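[Editorial note, not part of the patch above: the helper macros it adds (RTE_ETH_IS_IPV4_HDR, RTE_ETH_IS_IPV6_HDR, RTE_ETH_IS_TUNNEL_PKT) give applications one place to test the outer L3 and tunnel classification. The following is a minimal, hypothetical usage sketch that assumes these macros and the RTE_PTYPE_* definitions are available through rte_mbuf.h.]

#include <stdio.h>
#include <rte_mbuf.h>

/* Hypothetical helper: print a coarse classification of one received mbuf,
 * using only the unified packet_type field filled in by the PMD. */
static void
print_ptype_summary(const struct rte_mbuf *m)
{
        uint32_t ptype = m->packet_type;

        if (ptype == RTE_PTYPE_UNKNOWN) {
                printf("packet type not recognized by hardware\n");
                return;
        }
        if (RTE_ETH_IS_IPV4_HDR(ptype))
                printf("outer L3 is IPv4\n");
        else if (RTE_ETH_IS_IPV6_HDR(ptype))
                printf("outer L3 is IPv6\n");
        if (RTE_ETH_IS_TUNNEL_PKT(ptype))
                printf("tunneled packet, inner L4 field: %#x\n",
                       (unsigned int)(ptype & RTE_PTYPE_INNER_L4_MASK));
}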
* [dpdk-dev] [PATCH v5 04/18] e1000: replace bit mask based packet type with unified packet type
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (2 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 03/18] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 05/18] ixgbe: " Helin Zhang
` (14 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_e1000/igb_rxtx.c | 98 ++++++++++++++++++++++++++++++++++-------
1 file changed, 83 insertions(+), 15 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 80d05c0..6174fa7 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -590,17 +590,85 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
+#define IGB_PACKET_TYPE_IPV4 0X01
+#define IGB_PACKET_TYPE_IPV4_TCP 0X11
+#define IGB_PACKET_TYPE_IPV4_UDP 0X21
+#define IGB_PACKET_TYPE_IPV4_SCTP 0X41
+#define IGB_PACKET_TYPE_IPV4_EXT 0X03
+#define IGB_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IGB_PACKET_TYPE_IPV6 0X04
+#define IGB_PACKET_TYPE_IPV6_TCP 0X14
+#define IGB_PACKET_TYPE_IPV6_UDP 0X24
+#define IGB_PACKET_TYPE_IPV6_EXT 0X0C
+#define IGB_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IGB_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IGB_PACKET_TYPE_IPV4_IPV6 0X05
+#define IGB_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IGB_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IGB_PACKET_TYPE_MAX 0X80
+#define IGB_PACKET_TYPE_MASK 0X7F
+#define IGB_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IGB_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IGB_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [IGB_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IGB_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IGB_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & E1000_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IGB_PACKET_TYPE_SHIFT) & IGB_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- };
+ uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH;
#if defined(RTE_LIBRTE_IEEE1588)
static uint32_t ip_pkt_etqf_map[8] = {
@@ -608,14 +676,10 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
0, 0, 0, 0,
};
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-#else
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ pkt_flags |= ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07];
#endif
- return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
+
+ return pkt_flags;
}
static inline uint64_t
@@ -790,6 +854,8 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
+ lo_dword.hs_rss.pkt_info);
/*
* Store the mbuf address into the next entry of the array
@@ -1024,6 +1090,8 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
+ first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
+ lower.lo_dword.hs_rss.pkt_info);
/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
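[Editorial note, not part of the patch above: the igb change uses a dense, cache-aligned lookup table indexed by the descriptor's packet-type bits so that one load replaces a chain of comparisons. The stripped-down sketch below illustrates that pattern only; the HW_PTYPE_* values are invented for illustration and do not match the real igb descriptor layout.]

#include <stdint.h>
#include <rte_mbuf.h>

#define HW_PTYPE_MASK  0x03  /* assumed 2-bit hardware packet-type field */
#define HW_PTYPE_IPV4  0x01
#define HW_PTYPE_IPV6  0x02

static inline uint32_t
hw_ptype_to_rte_ptype(uint16_t pkt_info)
{
        /* Entries not listed stay zero, i.e. RTE_PTYPE_UNKNOWN; the real
         * driver additionally marks the table __rte_cache_aligned. */
        static const uint32_t table[HW_PTYPE_MASK + 1] = {
                [HW_PTYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
                [HW_PTYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
        };

        return table[pkt_info & HW_PTYPE_MASK];
}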
* [dpdk-dev] [PATCH v5 05/18] ixgbe: replace bit mask based packet type with unified packet type
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (3 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 04/18] e1000: replace bit mask based packet type with unified packet type Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 06/18] i40e: " Helin Zhang
` (13 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Note that a performance drop of around 2.5% (64B packets) was observed when
doing IO forwarding on 4 ports (1 port per 82599 card) on the same SNB core.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 139 +++++++++++++++++++++++++++++---------
1 file changed, 108 insertions(+), 31 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 57c9430..d1d8d8b 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -855,40 +855,107 @@ end_of_tx:
* RX functions
*
**********************************************************************/
-static inline uint64_t
-rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
+#define IXGBE_PACKET_TYPE_IPV4 0X01
+#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
+#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
+#define IXGBE_PACKET_TYPE_IPV4_SCTP 0X41
+#define IXGBE_PACKET_TYPE_IPV4_EXT 0X03
+#define IXGBE_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IXGBE_PACKET_TYPE_IPV6 0X04
+#define IXGBE_PACKET_TYPE_IPV6_TCP 0X14
+#define IXGBE_PACKET_TYPE_IPV6_UDP 0X24
+#define IXGBE_PACKET_TYPE_IPV6_EXT 0X0C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IXGBE_PACKET_TYPE_IPV4_IPV6 0X05
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IXGBE_PACKET_TYPE_MAX 0X80
+#define IXGBE_PACKET_TYPE_MASK 0X7F
+#define IXGBE_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+ixgbe_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
{
- uint64_t pkt_flags;
-
- static const uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
+ static const uint32_t
+ ptype_table[IXGBE_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IXGBE_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4,
+ [IXGBE_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IXGBE_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IXGBE_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
};
+ if (unlikely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IXGBE_PACKET_TYPE_SHIFT) &
+ IXGBE_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
- static const uint64_t ip_rss_types_map[16] = {
+static inline uint64_t
+ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
+{
+ static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
PKT_RX_RSS_HASH, 0, 0, 0,
0, 0, 0, PKT_RX_FDIR,
};
-
#ifdef RTE_LIBRTE_IEEE1588
static uint64_t ip_pkt_etqf_map[8] = {
0, 0, 0, PKT_RX_IEEE1588_PTP,
0, 0, 0, 0,
};
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
+ if (likely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return ip_pkt_etqf_map[(pkt_info >> 4) & 0X07] |
+ ip_rss_types_map[pkt_info & 0xF];
+ else
+ return ip_rss_types_map[pkt_info & 0xF];
#else
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-
+ return ip_rss_types_map[pkt_info & 0xF];
#endif
- return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
}
static inline uint64_t
@@ -945,7 +1012,9 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
- int s[LOOK_AHEAD], nb_dd;
+ int nb_dd;
+ uint32_t s[LOOK_AHEAD];
+ uint16_t pkt_info[LOOK_AHEAD];
int i, j, nb_rx = 0;
@@ -968,6 +1037,9 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rxdp[j].wb.upper.status_error;
+ for (j = LOOK_AHEAD-1; j >= 0; --j)
+ pkt_info[j] = rxdp[j].wb.lower.lo_dword.hs_rss.pkt_info;
+
/* Compute how many status bits were set */
nb_dd = 0;
for (j = 0; j < LOOK_AHEAD; ++j)
@@ -985,12 +1057,13 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
- rxdp[j].wb.lower.lo_dword.data);
- /* reuse status field from scan list */
- pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
+ pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags |=
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
mb->ol_flags = pkt_flags;
+ mb->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);
if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
@@ -1207,7 +1280,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
- uint32_t hlen_type_rss;
+ uint32_t pkt_info;
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1325,14 +1398,17 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;
- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
rxm->ol_flags = pkt_flags;
+ rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
@@ -1406,7 +1482,7 @@ ixgbe_fill_cluster_head_buf(
uint8_t port_id,
uint32_t staterr)
{
- uint32_t hlen_type_rss;
+ uint16_t pkt_info;
uint64_t pkt_flags;
head->port = port_id;
@@ -1416,11 +1492,12 @@ ixgbe_fill_cluster_head_buf(
* set in the pkt_flags field.
*/
head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
- hlen_type_rss = rte_le_to_cpu_32(desc->wb.lower.lo_dword.data);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags |= rx_desc_status_to_pkt_flags(staterr);
+ pkt_info = rte_le_to_cpu_32(desc->wb.lower.lo_dword.hs_rss.pkt_info);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags |= ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
head->ol_flags = pkt_flags;
+ head->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
if (likely(pkt_flags & PKT_RX_RSS_HASH))
head->hash.rss = rte_le_to_cpu_32(desc->wb.lower.hi_dword.rss);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
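[Editorial note, not part of the patch above: once the PMD fills mb->packet_type as shown, an application can decompose it field by field with the RTE_PTYPE_*_MASK constants instead of testing ol_flags bits. A small, hypothetical sketch:]

#include <rte_mbuf.h>

/* Returns non-zero when the outer headers are Ethernet plus IPv4 (with or
 * without options); the L2 mask isolates that sub-field of packet_type. */
static inline int
is_outer_ether_ipv4(const struct rte_mbuf *m)
{
        uint32_t pt = m->packet_type;

        return (pt & RTE_PTYPE_L2_MASK) == RTE_PTYPE_L2_MAC &&
                RTE_ETH_IS_IPV4_HDR(pt);
}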
* [dpdk-dev] [PATCH v5 06/18] i40e: replace bit mask based packet type with unified packet type
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (4 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 05/18] ixgbe: " Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 07/18] enic: " Helin Zhang
` (12 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_i40e/i40e_rxtx.c | 786 ++++++++++++++++++++++++++--------------
1 file changed, 512 insertions(+), 274 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 453f98f..7a84d9a 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -151,272 +151,511 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
return flags;
}
-/* Translate pkt types to pkt flags */
-static inline uint64_t
-i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
+/* The hardware datasheet describes in detail what each ptype value means */
+static inline uint32_t
+i40e_rxd_pkt_type_mapping(uint8_t ptype)
{
- uint8_t ptype = (uint8_t)((qword & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
- static const uint64_t ip_ptype_map[I40E_MAX_PKT_TYPE] = {
- 0, /* PTYPE 0 */
- 0, /* PTYPE 1 */
- 0, /* PTYPE 2 */
- 0, /* PTYPE 3 */
- 0, /* PTYPE 4 */
- 0, /* PTYPE 5 */
- 0, /* PTYPE 6 */
- 0, /* PTYPE 7 */
- 0, /* PTYPE 8 */
- 0, /* PTYPE 9 */
- 0, /* PTYPE 10 */
- 0, /* PTYPE 11 */
- 0, /* PTYPE 12 */
- 0, /* PTYPE 13 */
- 0, /* PTYPE 14 */
- 0, /* PTYPE 15 */
- 0, /* PTYPE 16 */
- 0, /* PTYPE 17 */
- 0, /* PTYPE 18 */
- 0, /* PTYPE 19 */
- 0, /* PTYPE 20 */
- 0, /* PTYPE 21 */
- PKT_RX_IPV4_HDR, /* PTYPE 22 */
- PKT_RX_IPV4_HDR, /* PTYPE 23 */
- PKT_RX_IPV4_HDR, /* PTYPE 24 */
- 0, /* PTYPE 25 */
- PKT_RX_IPV4_HDR, /* PTYPE 26 */
- PKT_RX_IPV4_HDR, /* PTYPE 27 */
- PKT_RX_IPV4_HDR, /* PTYPE 28 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 29 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 30 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 31 */
- 0, /* PTYPE 32 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 33 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 34 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 35 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 36 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 37 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 38 */
- 0, /* PTYPE 39 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 40 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 41 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 42 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 43 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 44 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 45 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 46 */
- 0, /* PTYPE 47 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 48 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 49 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 50 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 51 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 52 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 53 */
- 0, /* PTYPE 54 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 55 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 56 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 57 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 58 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 59 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 60 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 61 */
- 0, /* PTYPE 62 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 63 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 64 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 65 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 66 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 67 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 68 */
- 0, /* PTYPE 69 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 70 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 71 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 72 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 73 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 74 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 75 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 76 */
- 0, /* PTYPE 77 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 78 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 79 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 80 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 81 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 82 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 83 */
- 0, /* PTYPE 84 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 85 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 86 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 87 */
- PKT_RX_IPV6_HDR, /* PTYPE 88 */
- PKT_RX_IPV6_HDR, /* PTYPE 89 */
- PKT_RX_IPV6_HDR, /* PTYPE 90 */
- 0, /* PTYPE 91 */
- PKT_RX_IPV6_HDR, /* PTYPE 92 */
- PKT_RX_IPV6_HDR, /* PTYPE 93 */
- PKT_RX_IPV6_HDR, /* PTYPE 94 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 95 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 96 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 97 */
- 0, /* PTYPE 98 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 99 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 100 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 101 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 102 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 103 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 104 */
- 0, /* PTYPE 105 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 106 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 107 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 108 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 109 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 110 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 111 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 112 */
- 0, /* PTYPE 113 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 114 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 115 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 116 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 117 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 118 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 119 */
- 0, /* PTYPE 120 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 121 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 122 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 123 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 124 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 125 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 126 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 127 */
- 0, /* PTYPE 128 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 129 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 130 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 131 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 132 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 133 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 134 */
- 0, /* PTYPE 135 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 136 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 137 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 138 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 139 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 140 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 141 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 142 */
- 0, /* PTYPE 143 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 144 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 145 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 146 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 147 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 148 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 149 */
- 0, /* PTYPE 150 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 151 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 152 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 153 */
- 0, /* PTYPE 154 */
- 0, /* PTYPE 155 */
- 0, /* PTYPE 156 */
- 0, /* PTYPE 157 */
- 0, /* PTYPE 158 */
- 0, /* PTYPE 159 */
- 0, /* PTYPE 160 */
- 0, /* PTYPE 161 */
- 0, /* PTYPE 162 */
- 0, /* PTYPE 163 */
- 0, /* PTYPE 164 */
- 0, /* PTYPE 165 */
- 0, /* PTYPE 166 */
- 0, /* PTYPE 167 */
- 0, /* PTYPE 168 */
- 0, /* PTYPE 169 */
- 0, /* PTYPE 170 */
- 0, /* PTYPE 171 */
- 0, /* PTYPE 172 */
- 0, /* PTYPE 173 */
- 0, /* PTYPE 174 */
- 0, /* PTYPE 175 */
- 0, /* PTYPE 176 */
- 0, /* PTYPE 177 */
- 0, /* PTYPE 178 */
- 0, /* PTYPE 179 */
- 0, /* PTYPE 180 */
- 0, /* PTYPE 181 */
- 0, /* PTYPE 182 */
- 0, /* PTYPE 183 */
- 0, /* PTYPE 184 */
- 0, /* PTYPE 185 */
- 0, /* PTYPE 186 */
- 0, /* PTYPE 187 */
- 0, /* PTYPE 188 */
- 0, /* PTYPE 189 */
- 0, /* PTYPE 190 */
- 0, /* PTYPE 191 */
- 0, /* PTYPE 192 */
- 0, /* PTYPE 193 */
- 0, /* PTYPE 194 */
- 0, /* PTYPE 195 */
- 0, /* PTYPE 196 */
- 0, /* PTYPE 197 */
- 0, /* PTYPE 198 */
- 0, /* PTYPE 199 */
- 0, /* PTYPE 200 */
- 0, /* PTYPE 201 */
- 0, /* PTYPE 202 */
- 0, /* PTYPE 203 */
- 0, /* PTYPE 204 */
- 0, /* PTYPE 205 */
- 0, /* PTYPE 206 */
- 0, /* PTYPE 207 */
- 0, /* PTYPE 208 */
- 0, /* PTYPE 209 */
- 0, /* PTYPE 210 */
- 0, /* PTYPE 211 */
- 0, /* PTYPE 212 */
- 0, /* PTYPE 213 */
- 0, /* PTYPE 214 */
- 0, /* PTYPE 215 */
- 0, /* PTYPE 216 */
- 0, /* PTYPE 217 */
- 0, /* PTYPE 218 */
- 0, /* PTYPE 219 */
- 0, /* PTYPE 220 */
- 0, /* PTYPE 221 */
- 0, /* PTYPE 222 */
- 0, /* PTYPE 223 */
- 0, /* PTYPE 224 */
- 0, /* PTYPE 225 */
- 0, /* PTYPE 226 */
- 0, /* PTYPE 227 */
- 0, /* PTYPE 228 */
- 0, /* PTYPE 229 */
- 0, /* PTYPE 230 */
- 0, /* PTYPE 231 */
- 0, /* PTYPE 232 */
- 0, /* PTYPE 233 */
- 0, /* PTYPE 234 */
- 0, /* PTYPE 235 */
- 0, /* PTYPE 236 */
- 0, /* PTYPE 237 */
- 0, /* PTYPE 238 */
- 0, /* PTYPE 239 */
- 0, /* PTYPE 240 */
- 0, /* PTYPE 241 */
- 0, /* PTYPE 242 */
- 0, /* PTYPE 243 */
- 0, /* PTYPE 244 */
- 0, /* PTYPE 245 */
- 0, /* PTYPE 246 */
- 0, /* PTYPE 247 */
- 0, /* PTYPE 248 */
- 0, /* PTYPE 249 */
- 0, /* PTYPE 250 */
- 0, /* PTYPE 251 */
- 0, /* PTYPE 252 */
- 0, /* PTYPE 253 */
- 0, /* PTYPE 254 */
- 0, /* PTYPE 255 */
+ static const uint32_t ptype_table[UINT8_MAX] __rte_cache_aligned = {
+ /* L2 types */
+ /* [0] reserved */
+ [1] = RTE_PTYPE_L2_MAC,
+ [2] = RTE_PTYPE_L2_MAC_TIMESYNC,
+ /* [3] - [5] reserved */
+ [6] = RTE_PTYPE_L2_LLDP,
+ /* [7] - [10] reserved */
+ [11] = RTE_PTYPE_L2_ARP,
+ /* [12] - [21] reserved */
+
+ /* Non tunneled IPv4 */
+ [22] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [24] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [25] reserved */
+ [26] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv4 */
+ [29] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [30] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [31] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [32] reserved */
+ [33] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [34] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [35] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> IPv6 */
+ [36] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [37] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [38] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [39] reserved */
+ [40] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [41] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [42] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN */
+ [43] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+ [44] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [45] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [46] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [47] reserved */
+ [48] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [49] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [50] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+ [51] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [52] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [53] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [54] reserved */
+ [55] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [56] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [57] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+ [58] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [59] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [60] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [61] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [62] reserved */
+ [63] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [64] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [65] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [66] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [67] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [68] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [69] reserved */
+ [70] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [71] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [72] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [73] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [74] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [75] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [76] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [77] reserved */
+ [78] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [79] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [80] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [81] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [82] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [83] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [84] reserved */
+ [85] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [86] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [87] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* Non tunneled IPv6 */
+ [88] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [90] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [91] reserved */
+ [92] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv4 */
+ [95] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [96] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [97] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [98] reserved */
+ [99] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [100] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [101] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> IPv6 */
+ [102] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [103] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [104] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [105] reserved */
+ [106] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [107] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [108] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN */
+ [109] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+ [110] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [111] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [112] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [113] reserved */
+ [114] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [115] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [116] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+ [117] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [118] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [119] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [120] reserved */
+ [121] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [122] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [123] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+ [124] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [125] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [126] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [127] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [128] reserved */
+ [129] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [130] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [131] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [132] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [133] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [134] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [135] reserved */
+ [136] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [137] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [138] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [139] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [140] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [141] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [142] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [143] reserved */
+ [144] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [145] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [146] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [147] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [148] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [149] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [150] reserved */
+ [151] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [152] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [153] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* All others reserved */
};
- return ip_ptype_map[ptype];
+ return ptype_table[ptype];
}
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
@@ -709,11 +948,11 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- mb->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ mb->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -952,9 +1191,9 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ rxm->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1111,10 +1350,9 @@ i40e_recv_scattered_pkts(void *rx_queue,
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- first_seg->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
+ first_seg->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
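[Editorial note, not part of the patch above: with the i40e table filled in this way, tunneled traffic can be recognized and its inner transport type examined through the INNER masks defined earlier. An illustrative sketch, counting tunneled packets whose inner L4 is UDP:]

#include <stdint.h>
#include <rte_mbuf.h>

/* Count packets in a burst that carry a tunnel in the outer headers and
 * UDP as the inner L4, using only the unified packet_type field. */
static unsigned int
count_tunneled_inner_udp(struct rte_mbuf **pkts, uint16_t nb_pkts)
{
        unsigned int count = 0;
        uint16_t i;

        for (i = 0; i < nb_pkts; i++) {
                uint32_t pt = pkts[i]->packet_type;

                if (RTE_ETH_IS_TUNNEL_PKT(pt) &&
                    (pt & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_UDP)
                        count++;
        }
        return count;
}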
* [dpdk-dev] [PATCH v5 07/18] enic: replace bit mask based packet type with unified packet type
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (5 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 06/18] i40e: " Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 08/18] vmxnet3: " Helin Zhang
` (11 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_enic/enic_main.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
diff --git a/lib/librte_pmd_enic/enic_main.c b/lib/librte_pmd_enic/enic_main.c
index 15313c2..da52003 100644
--- a/lib/librte_pmd_enic/enic_main.c
+++ b/lib/librte_pmd_enic/enic_main.c
@@ -423,7 +423,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -432,7 +432,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Header split */
if (sop && !eop) {
@@ -445,7 +445,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -457,13 +457,14 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV4;
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -475,7 +476,8 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV6;
}
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
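[Editorial note, not part of the patch above: after this change, enic reports the L3 classification through packet_type while checksum results stay in ol_flags, so the two are checked through separate fields. A hypothetical receiver-side helper:]

#include <rte_mbuf.h>

/* The IPv4 checksum result is only meaningful when the unified packet
 * type says the L3 header is IPv4. */
static inline int
ipv4_csum_is_bad(const struct rte_mbuf *m)
{
        return RTE_ETH_IS_IPV4_HDR(m->packet_type) &&
                (m->ol_flags & PKT_RX_IP_CKSUM_BAD);
}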
* [dpdk-dev] [PATCH v5 08/18] vmxnet3: replace bit mask based packet type with unified packet type
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (6 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 07/18] enic: " Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 09/18] fm10k: " Helin Zhang
` (10 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index d8019f5..eef09bc 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -649,9 +649,9 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
- rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
else
- rxm->ol_flags |= PKT_RX_IPV4_HDR;
+ rxm->packet_type = RTE_PTYPE_L3_IPV4;
if (!rcd->cnc) {
if (!rcd->ipc)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
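[Editorial note, not part of the patch above: the vmxnet3 change keys off the IPv4 IHL field, which counts 32-bit words, so any value above 5 (a 20-byte header) means options are present and the packet maps to RTE_PTYPE_L3_IPV4_EXT. A standalone, hypothetical version of that check:]

#include <stdint.h>
#include <rte_mbuf.h>

/* version_ihl is the first byte of an IPv4 header the caller has already
 * located; IHL > 5 words means header options are present. */
static inline uint32_t
ipv4_ptype_from_version_ihl(uint8_t version_ihl)
{
        if ((version_ihl & 0x0f) > 5)
                return RTE_PTYPE_L3_IPV4_EXT;
        return RTE_PTYPE_L3_IPV4;
}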
* [dpdk-dev] [PATCH v5 09/18] fm10k: replace bit mask based packet type with unified packet type
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (7 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 08/18] vmxnet3: " Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 10/18] app/test-pipeline: " Helin Zhang
` (9 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_fm10k/fm10k_rxtx.c | 30 +++++++++++++++++++++---------
1 file changed, 21 insertions(+), 9 deletions(-)
v4 changes:
* Supported unified packet type of fm10k from v4.
v5 changes:
* Re-worded the commit logs.
diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c
index 56df6cd..b35efd1 100644
--- a/lib/librte_pmd_fm10k/fm10k_rxtx.c
+++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c
@@ -68,13 +68,29 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
static inline void
rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
{
- uint16_t ptype;
- static const uint16_t pt_lut[] = { 0,
- PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT,
- 0, 0, 0
+ static const uint32_t
+ ptype_table[FM10K_RXD_PKTTYPE_MASK >> FM10K_RXD_PKTTYPE_SHIFT]
+ __rte_cache_aligned = {
+ [FM10K_PKTTYPE_OTHER] = RTE_PTYPE_L2_MAC,
+ [FM10K_PKTTYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [FM10K_PKTTYPE_IPV4_EX] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [FM10K_PKTTYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [FM10K_PKTTYPE_IPV6_EX] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
};
+ m->packet_type = ptype_table[(d->w.pkt_info & FM10K_RXD_PKTTYPE_MASK)
+ >> FM10K_RXD_PKTTYPE_SHIFT];
+
if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK)
m->ol_flags |= PKT_RX_RSS_HASH;
@@ -96,10 +112,6 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
if (unlikely(d->d.staterr & FM10K_RXD_STATUS_RXE))
m->ol_flags |= PKT_RX_RECIP_ERR;
-
- ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >>
- FM10K_RXD_PKTTYPE_SHIFT;
- m->ol_flags |= pt_lut[(uint8_t)ptype];
}
uint16_t
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v5 10/18] app/test-pipeline: replace bit mask based packet type with unified packet type
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (8 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 09/18] fm10k: " Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 11/18] app/testpmd: " Helin Zhang
` (8 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pipeline/pipeline_hash.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 4598ad4..548615f 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,20 +459,21 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;
k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;
memcpy(key, ipv6_dst, 16);
- }
+ } else
+ continue;
*signature = test_hash(key, 0, 0);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
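[Editorial note, not part of the patch above: the pipeline now selects the hash-key layout from the outer L3 type and skips non-IP packets. The sketch below shows the same dispatch as a standalone helper, under the assumption that the caller already points at the outer L3 header.]

#include <stdint.h>
#include <string.h>
#include <rte_mbuf.h>

/* Copy a flow key from the outer L3 header: 4 bytes of IPv4 destination or
 * 16 bytes of IPv6 destination. Returns the key length, or 0 for non-IP. */
static inline size_t
fill_l3_key(const struct rte_mbuf *m, const void *l3_hdr, uint8_t key[16])
{
        if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
                /* IPv4 destination address sits at byte offset 16 */
                memcpy(key, (const uint8_t *)l3_hdr + 16, 4);
                return 4;
        }
        if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
                /* IPv6 destination address sits at byte offset 24 */
                memcpy(key, (const uint8_t *)l3_hdr + 24, 16);
                return 16;
        }
        return 0;
}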
* [dpdk-dev] [PATCH v5 11/18] app/testpmd: replace bit mask based packet type with unified packet type
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (9 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 10/18] app/test-pipeline: " Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 12/18] app/test: Remove useless code Helin Zhang
` (7 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
app/test-pmd/csumonly.c | 10 +--
app/test-pmd/rxonly.c | 178 ++++++++++++++++++++++++++++++++++++++++++++++--
2 files changed, 177 insertions(+), 11 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v4 changes:
* Added printing logs of packet types of each received packet in rxonly mode.
v5 changes:
* Re-worded the commit logs.
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index c180ff2..2759985 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -202,8 +202,9 @@ parse_ethernet(struct ether_hdr *eth_hdr, struct testpmd_offload_info *info)
/* Parse a vxlan header */
static void
-parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
- uint64_t mbuf_olflags)
+parse_vxlan(struct udp_hdr *udp_hdr,
+ struct testpmd_offload_info *info,
+ uint32_t pkt_type)
{
struct ether_hdr *eth_hdr;
@@ -211,8 +212,7 @@ parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
* (rfc7348) or that the rx offload flag is set (i40e only
* currently) */
if (udp_hdr->dst_port != _htons(4789) &&
- (mbuf_olflags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR)) == 0)
+ RTE_ETH_IS_TUNNEL_PKT(pkt_type) == 0)
return;
info->is_tunnel = 1;
@@ -549,7 +549,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
struct udp_hdr *udp_hdr;
udp_hdr = (struct udp_hdr *)((char *)l3_hdr +
info.l3_len);
- parse_vxlan(udp_hdr, &info, m->ol_flags);
+ parse_vxlan(udp_hdr, &info, m->packet_type);
} else if (info.l4_proto == IPPROTO_GRE) {
struct simple_gre_hdr *gre_hdr;
gre_hdr = (struct simple_gre_hdr *)
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index ac56090..92c775f 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -91,7 +91,7 @@ pkt_burst_receive(struct fwd_stream *fs)
uint64_t ol_flags;
uint16_t nb_rx;
uint16_t i, packet_type;
- uint64_t is_encapsulation;
+ uint16_t is_encapsulation;
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
@@ -134,10 +134,7 @@ pkt_burst_receive(struct fwd_stream *fs)
eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
-
- is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR);
-
+ is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
print_ether_addr(" src=", ð_hdr->s_addr);
print_ether_addr(" - dst=", ð_hdr->d_addr);
printf(" - type=0x%04x - length=%u - nb_segs=%d",
@@ -160,6 +157,175 @@ pkt_burst_receive(struct fwd_stream *fs)
}
if (ol_flags & PKT_RX_VLAN_PKT)
printf(" - VLAN tci=0x%x", mb->vlan_tci);
+ if (mb->packet_type) {
+ uint32_t ptype;
+
+ /* (outer) L2 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L2_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L2_MAC:
+ printf(" - (outer) L2 type: MAC");
+ break;
+ case RTE_PTYPE_L2_MAC_TIMESYNC:
+ printf(" - (outer) L2 type: MAC Timesync");
+ break;
+ case RTE_PTYPE_L2_ARP:
+ printf(" - (outer) L2 type: ARP");
+ break;
+ case RTE_PTYPE_L2_LLDP:
+ printf(" - (outer) L2 type: LLDP");
+ break;
+ default:
+ printf(" - (outer) L2 type: Unknown");
+ break;
+ }
+
+ /* (outer) L3 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L3_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L3_IPV4:
+ printf(" - (outer) L3 type: IPV4");
+ break;
+ case RTE_PTYPE_L3_IPV4_EXT:
+ printf(" - (outer) L3 type: IPV4_EXT");
+ break;
+ case RTE_PTYPE_L3_IPV6:
+ printf(" - (outer) L3 type: IPV6");
+ break;
+ case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
+ printf(" - (outer) L3 type: IPV4_EXT_UNKNOWN");
+ break;
+ case RTE_PTYPE_L3_IPV6_EXT:
+ printf(" - (outer) L3 type: IPV6_EXT");
+ break;
+ case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN:
+ printf(" - (outer) L3 type: IPV6_EXT_UNKNOWN");
+ break;
+ default:
+ printf(" - (outer) L3 type: Unknown");
+ break;
+ }
+
+ /* (outer) L4 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L4_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L4_TCP:
+ printf(" - (outer) L4 type: TCP");
+ break;
+ case RTE_PTYPE_L4_UDP:
+ printf(" - (outer) L4 type: UDP");
+ break;
+ case RTE_PTYPE_L4_FRAG:
+ printf(" - (outer) L4 type: L4_FRAG");
+ break;
+ case RTE_PTYPE_L4_SCTP:
+ printf(" - (outer) L4 type: SCTP");
+ break;
+ case RTE_PTYPE_L4_ICMP:
+ printf(" - (outer) L4 type: ICMP");
+ break;
+ case RTE_PTYPE_L4_NONFRAG:
+ printf(" - (outer) L4 type: L4_NONFRAG");
+ break;
+ default:
+ printf(" - (outer) L4 type: Unknown");
+ break;
+ }
+
+ /* packet tunnel type */
+ ptype = mb->packet_type & RTE_PTYPE_TUNNEL_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_TUNNEL_IP:
+ printf(" - Tunnel type: IP");
+ break;
+ case RTE_PTYPE_TUNNEL_GRE:
+ printf(" - Tunnel type: GRE");
+ break;
+ case RTE_PTYPE_TUNNEL_VXLAN:
+ printf(" - Tunnel type: VXLAN");
+ break;
+ case RTE_PTYPE_TUNNEL_NVGRE:
+ printf(" - Tunnel type: NVGRE");
+ break;
+ case RTE_PTYPE_TUNNEL_GENEVE:
+ printf(" - Tunnel type: GENEVE");
+ break;
+ case RTE_PTYPE_TUNNEL_GRENAT:
+ printf(" - Tunnel type: GRENAT");
+ break;
+ default:
+ printf(" - Tunnel type: Unkown");
+ break;
+ }
+
+ /* inner L2 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_L2_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L2_MAC:
+ printf(" - Inner L2 type: MAC");
+ break;
+ case RTE_PTYPE_INNER_L2_MAC_VLAN:
+ printf(" - Inner L2 type: MAC_VLAN");
+ break;
+ default:
+ printf(" - Inner L2 type: Unknown");
+ break;
+ }
+
+ /* inner L3 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_INNER_L3_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L3_IPV4:
+ printf(" - Inner L3 type: IPV4");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV4_EXT:
+ printf(" - Inner L3 type: IPV4_EXT");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6:
+ printf(" - Inner L3 type: IPV6");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN:
+ printf(" - Inner L3 type: IPV4_EXT_UNKNOWN");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6_EXT:
+ printf(" - Inner L3 type: IPV6_EXT");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN:
+ printf(" - Inner L3 type: IPV6_EXT_UNKOWN");
+ break;
+ default:
+ printf(" - Inner L3 type: Unkown");
+ break;
+ }
+
+ /* inner L4 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_L4_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L4_TCP:
+ printf(" - Inner L4 type: TCP");
+ break;
+ case RTE_PTYPE_INNER_L4_UDP:
+ printf(" - Inner L4 type: UDP");
+ break;
+ case RTE_PTYPE_INNER_L4_FRAG:
+ printf(" - Inner L4 type: L4_FRAG");
+ break;
+ case RTE_PTYPE_INNER_L4_SCTP:
+ printf(" - Inner L4 type: SCTP");
+ break;
+ case RTE_PTYPE_INNER_L4_ICMP:
+ printf(" - Inner L4 type: ICMP");
+ break;
+ case RTE_PTYPE_INNER_L4_NONFRAG:
+ printf(" - Inner L4 type: L4_NONFRAG");
+ break;
+ default:
+ printf(" - Inner L4 type: Unknown");
+ break;
+ }
+ printf("\n");
+ } else
+ printf("Unknown packet type\n");
if (is_encapsulation) {
struct ipv4_hdr *ipv4_hdr;
struct ipv6_hdr *ipv6_hdr;
@@ -173,7 +339,7 @@ pkt_burst_receive(struct fwd_stream *fs)
l2_len = sizeof(struct ether_hdr);
/* Do not support ipv4 option field */
- if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
l3_len = sizeof(struct ipv4_hdr);
ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
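The long switch chains above simply peel packet_type apart field by field. Here is a compact sketch of the same mechanics: each 4-bit group is isolated with its mask, and tunneled packets are flagged the same way is_encapsulation is derived above. The mask values follow the bit layout documented later in this series (bits 3:0 L2, 7:4 L3, 11:8 L4, 15:12 tunnel, then the inner fields); the sample values are invented, and the per-value name printing that testpmd does is omitted for brevity.

#include <stdint.h>
#include <stdio.h>

#define PTYPE_L2_MASK       0x0000000f	/* stand-ins for the RTE_PTYPE_*_MASK macros */
#define PTYPE_L3_MASK       0x000000f0
#define PTYPE_L4_MASK       0x00000f00
#define PTYPE_TUNNEL_MASK   0x0000f000
#define PTYPE_INNER_L3_MASK 0x00f00000

static void dump_ptype(uint32_t pt)
{
	printf("l2=0x%x l3=0x%x l4=0x%x tunnel=0x%x inner_l3=0x%x%s\n",
	       (unsigned)(pt & PTYPE_L2_MASK), (unsigned)(pt & PTYPE_L3_MASK),
	       (unsigned)(pt & PTYPE_L4_MASK), (unsigned)(pt & PTYPE_TUNNEL_MASK),
	       (unsigned)(pt & PTYPE_INNER_L3_MASK),
	       (pt & PTYPE_TUNNEL_MASK) ? " (encapsulated)" : "");
}

int main(void)
{
	dump_ptype(0x00000211);	/* plain MAC | IPv4 | UDP */
	dump_ptype(0x00213091);	/* hypothetical tunneled packet */
	return 0;
}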
* [dpdk-dev] [PATCH v5 12/18] app/test: Remove useless code
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (10 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 11/18] app/testpmd: " Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
` (6 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
Several useless code lines were added accidentally, which blocks packet
type unification. They should be deleted entirely.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test/packet_burst_generator.c | 10 ----------
1 file changed, 10 deletions(-)
v4 changes:
* Removed several useless code lines which block packet type unification.
v5 changes:
* Re-worded the commit logs.
diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index b46eed7..b9f8f1a 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -272,19 +272,9 @@ nomore_mbuf:
if (ipv4) {
pkt->vlan_tci = ETHER_TYPE_IPv4;
pkt->l3_len = sizeof(struct ipv4_hdr);
-
- if (vlan_enabled)
- pkt->ol_flags = PKT_RX_IPV4_HDR | PKT_RX_VLAN_PKT;
- else
- pkt->ol_flags = PKT_RX_IPV4_HDR;
} else {
pkt->vlan_tci = ETHER_TYPE_IPv6;
pkt->l3_len = sizeof(struct ipv6_hdr);
-
- if (vlan_enabled)
- pkt->ol_flags = PKT_RX_IPV6_HDR | PKT_RX_VLAN_PKT;
- else
- pkt->ol_flags = PKT_RX_IPV6_HDR;
}
pkts_burst[nb_pkt] = pkt;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v5 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (11 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 12/18] app/test: Remove useless code Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 14/18] examples/ip_reassembly: " Helin Zhang
` (5 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_fragmentation/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 0922ba6..fbc0b8d 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -283,7 +283,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
len = qconf->tx_mbufs[port_out].len;
/* if this is an IPv4 packet */
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
/* Read the lookup key (i.e. ip_dst) from the input packet */
@@ -317,9 +317,8 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
if (unlikely (len2 < 0))
return;
}
- }
- /* if this is an IPv6 packet */
- else if (m->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if this is an IPv6 packet */
struct ipv6_hdr *ip_hdr;
ipv6 = 1;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v5 14/18] examples/ip_reassembly: replace bit mask based packet type with unified packet type
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (12 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 15/18] examples/l3fwd-acl: " Helin Zhang
` (4 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_reassembly/main.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 9ecb6f9..741c398 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -356,7 +356,7 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dst_port = portid;
/* if packet is IPv4 */
- if (m->ol_flags & (PKT_RX_IPV4_HDR)) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
@@ -396,9 +396,8 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
}
eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
- }
- /* if packet is IPv6 */
- else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if packet is IPv6 */
struct ipv6_extension_fragment *frag_hdr;
struct ipv6_hdr *ip_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v5 15/18] examples/l3fwd-acl: replace bit mask based packet type with unified packet type
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (13 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 14/18] examples/ip_reassembly: " Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 16/18] examples/l3fwd-power: " Helin Zhang
` (3 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-acl/main.c | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a5d4f25..681b675 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -645,9 +645,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];
- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt,
unsigned char *) + sizeof(struct ether_hdr));
@@ -668,8 +666,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
rte_pktmbuf_free(pkt);
}
- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -687,17 +684,13 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];
- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
-
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;
- } else if (type == PKT_RX_IPV6_HDR) {
-
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -745,9 +738,9 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
- if (m->ol_flags & PKT_RX_IPV4_HDR)
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
dump_acl4_rule(m, res);
- else
+ else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
dump_acl6_rule(m, res);
}
#endif
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v5 16/18] examples/l3fwd-power: replace bit mask based packet type with unified packet type
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (14 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 15/18] examples/l3fwd-acl: " Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 17/18] examples/l3fwd: " Helin Zhang
` (2 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-power/main.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 6ac342b..6be0a8c 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -635,7 +635,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr =
(struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char*)
@@ -670,8 +670,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
ether_addr_copy(&ports_eth_addr[dst_port], ð_hdr->s_addr);
send_single_packet(m, dst_port);
- }
- else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
struct ipv6_hdr *ipv6_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v5 17/18] examples/l3fwd: replace bit mask based packet type with unified packet type
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (15 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 16/18] examples/l3fwd-power: " Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 18/18] mbuf: remove old packet type bit masks Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd/main.c | 71 +++++++++++++++++++++++++++++----------------------
1 file changed, 40 insertions(+), 31 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v3 changes:
* Minor bug fixes and enhancements.
v5 changes:
* Re-worded the commit logs.
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index e32512e..1e6aca9 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -955,7 +955,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
/* Handle IPv4 headers.*/
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char *) +
sizeof(struct ether_hdr));
@@ -990,7 +990,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
send_single_packet(m, dst_port);
- } else {
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;
@@ -1011,8 +1011,9 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], ð_hdr->s_addr);
send_single_packet(m, dst_port);
- }
-
+ } else
+ /* Free the mbuf that contains non-IPV4/IPV6 packet */
+ rte_pktmbuf_free(m);
}
#ifdef DO_RFC_1812_CHECKS
@@ -1036,11 +1037,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
-rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
{
uint8_t ihl;
- if ((flags & PKT_RX_IPV4_HDR) != 0) {
+ if (RTE_ETH_IS_IPV4_HDR(ptype)) {
ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
@@ -1071,11 +1072,11 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;
- if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
- } else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1109,17 +1110,19 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];
dst_port[0] = dp;
- rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
+ rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
}
/*
- * Read ol_flags and destination IPV4 addresses from 4 mbufs.
+ * Read packet_type and destination IPV4 addresses from 4 mbufs.
*/
static inline void
-processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
+processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
+ __m128i *dip,
+ uint32_t *ipv4_flag)
{
struct ipv4_hdr *ipv4_hdr;
struct ether_hdr *eth_hdr;
@@ -1128,22 +1131,22 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x0 = ipv4_hdr->dst_addr;
- flag[0] = pkt[0]->ol_flags & PKT_RX_IPV4_HDR;
+ ipv4_flag[0] = pkt[0]->packet_type & RTE_PTYPE_L3_IPV4;
eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x1 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[1]->ol_flags;
+ ipv4_flag[0] &= pkt[1]->packet_type;
eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x2 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[2]->ol_flags;
+ ipv4_flag[0] &= pkt[2]->packet_type;
eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
x3 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[3]->ol_flags;
+ ipv4_flag[0] &= pkt[3]->packet_type;
dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
@@ -1153,8 +1156,12 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
-processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
- uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
+processx4_step2(const struct lcore_conf *qconf,
+ __m128i dip,
+ uint32_t ipv4_flag,
+ uint8_t portid,
+ struct rte_mbuf *pkt[FWDSTEP],
+ uint16_t dprt[FWDSTEP])
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1164,7 +1171,7 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);
/* if all 4 packets are IPV4. */
- if (likely(flag != 0)) {
+ if (likely(ipv4_flag)) {
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1215,13 +1222,13 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[3], te[3]);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
- &dst_port[0], pkt[0]->ol_flags);
+ &dst_port[0], pkt[0]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
- &dst_port[1], pkt[1]->ol_flags);
+ &dst_port[1], pkt[1]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
- &dst_port[2], pkt[2]->ol_flags);
+ &dst_port[2], pkt[2]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
- &dst_port[3], pkt[3]->ol_flags);
+ &dst_port[3], pkt[3]->packet_type);
}
/*
@@ -1408,7 +1415,7 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
- uint32_t flag[MAX_PKT_BURST / FWDSTEP];
+ uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
uint16_t pnum[MAX_PKT_BURST + 1];
#endif
@@ -1478,14 +1485,16 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
for (j = 0; j < n ; j+=4) {
- uint32_t ol_flag = pkts_burst[j]->ol_flags
- & pkts_burst[j+1]->ol_flags
- & pkts_burst[j+2]->ol_flags
- & pkts_burst[j+3]->ol_flags;
- if (ol_flag & PKT_RX_IPV4_HDR ) {
+ uint32_t pkt_type =
+ pkts_burst[j]->packet_type &
+ pkts_burst[j+1]->packet_type &
+ pkts_burst[j+2]->packet_type &
+ pkts_burst[j+3]->packet_type;
+ if (pkt_type & RTE_PTYPE_L3_IPV4) {
simple_ipv4_fwd_4pkts(&pkts_burst[j],
portid, qconf);
- } else if (ol_flag & PKT_RX_IPV6_HDR) {
+ } else if (pkt_type &
+ RTE_PTYPE_L3_IPV6) {
simple_ipv6_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1510,13 +1519,13 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
- &flag[j / FWDSTEP]);
+ &ipv4_flag[j / FWDSTEP]);
}
k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
- flag[j / FWDSTEP], portid,
+ ipv4_flag[j / FWDSTEP], portid,
&pkts_burst[j], &dst_port[j]);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
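The interesting part of the l3fwd change is the burst gating: processx4_step1() now ANDs the packet_type of four consecutive mbufs, and if the RTE_PTYPE_L3_IPV4 bit survives the AND then every packet in the group is IPv4 and the grouped fast path can be taken. A scalar, self-contained sketch of that gate, with invented packet_type values, is below.

#include <stdint.h>
#include <stdio.h>

#define RTE_PTYPE_L3_IPV4 0x00000010
#define RTE_PTYPE_L3_IPV6 0x00000040

int main(void)
{
	/* all four values carry the IPv4 bit, so the AND keeps it */
	uint32_t burst[4] = { 0x211, 0x291, 0x311, 0x211 };
	uint32_t agg = burst[0] & burst[1] & burst[2] & burst[3];

	if (agg & RTE_PTYPE_L3_IPV4)
		printf("all 4 are IPv4 -> simple_ipv4_fwd_4pkts()\n");
	else if (agg & RTE_PTYPE_L3_IPV6)
		printf("all 4 are IPv6 -> simple_ipv6_fwd_4pkts()\n");
	else
		printf("mixed burst -> per-packet path\n");
	return 0;
}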
* [dpdk-dev] [PATCH v5 18/18] mbuf: remove old packet type bit masks
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (16 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 17/18] examples/l3fwd: " Helin Zhang
@ 2015-05-22 8:44 ` Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-05-22 8:44 UTC (permalink / raw)
To: dev
As unified packet types are used instead, those old bit masks and
the relevant macros for packet type indication need to be removed.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 6 ------
lib/librte_mbuf/rte_mbuf.h | 6 ------
2 files changed, 12 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
* Redefined the bit masks for packet RX offload flags.
v5 changes:
* Rolled back the bit masks of RX flags, for ABI compatibility.
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index f506517..78688f7 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -251,14 +251,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
- case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
- case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
- case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
- case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
- case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
- case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 6a26172..aea9ba8 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -91,14 +91,8 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
-#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
-#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
-#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
-#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
-#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
-#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
/* add new RX flags here */
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
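For applications outside the tree, the removal above implies the same conversion the in-tree apps received earlier in this series. A compact summary as C comments, derived only from the diffs above (not an official migration table):

/*
 *   old ol_flags test                              new packet_type test
 *   -----------------------------------------------------------------------------
 *   m->ol_flags & PKT_RX_IPV4_HDR              ->  RTE_ETH_IS_IPV4_HDR(m->packet_type)
 *   m->ol_flags & (PKT_RX_IPV6_HDR |
 *                  PKT_RX_IPV6_HDR_EXT)        ->  RTE_ETH_IS_IPV6_HDR(m->packet_type)
 *   m->ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
 *                  PKT_RX_TUNNEL_IPV6_HDR)     ->  RTE_ETH_IS_TUNNEL_PKT(m->packet_type)
 */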
* Re: [dpdk-dev] [PATCH v5 01/18] mbuf: redefine packet_type in rte_mbuf
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
@ 2015-05-22 10:09 ` Neil Horman
0 siblings, 0 replies; 257+ messages in thread
From: Neil Horman @ 2015-05-22 10:09 UTC (permalink / raw)
To: Helin Zhang; +Cc: dev
On Fri, May 22, 2015 at 04:44:07PM +0800, Helin Zhang wrote:
> In order to unify the packet type, the field of 'packet_type' in
> 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
> Accordingly, some fields in 'struct rte_mbuf' are re-organized to
> support this change for Vector PMD. As 'struct rte_kni_mbuf' for
> KNI should be right mapped to 'struct rte_mbuf', it should be
> modified accordingly. In addition, Vector PMD of ixgbe is disabled
> by default, as 'struct rte_mbuf' changed.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> ---
> config/common_linuxapp | 2 +-
> .../linuxapp/eal/include/exec-env/rte_kni_common.h | 4 ++--
> lib/librte_mbuf/rte_mbuf.h | 23 +++++++++++++++-------
> 3 files changed, 19 insertions(+), 10 deletions(-)
>
> v2 changes:
> * Enlarged the packet_type field from 16 bits to 32 bits.
> * Redefined the packet type sub-fields.
> * Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
>
> v3 changes:
> * Put the mbuf layout changes into a single patch.
> * Disabled vector ixgbe PMD by default, as mbuf layout changed.
>
> v5 changes:
> * Re-worded the commit logs.
>
> diff --git a/config/common_linuxapp b/config/common_linuxapp
> index 0078dc9..6b067c7 100644
> --- a/config/common_linuxapp
> +++ b/config/common_linuxapp
> @@ -167,7 +167,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
> CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
> CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
> CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
> -CONFIG_RTE_IXGBE_INC_VECTOR=y
> +CONFIG_RTE_IXGBE_INC_VECTOR=n
> CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
>
> #
> diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> index 1e55c2d..bd1cc09 100644
> --- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> +++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> @@ -117,9 +117,9 @@ struct rte_kni_mbuf {
> uint16_t data_off; /**< Start address of data in segment buffer. */
> char pad1[4];
> uint64_t ol_flags; /**< Offload features. */
> - char pad2[2];
> - uint16_t data_len; /**< Amount of data in segment buffer. */
> + char pad2[4];
> uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
> + uint16_t data_len; /**< Amount of data in segment buffer. */
>
> /* fields on second cache line */
> char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index ab6de67..c2b1463 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -269,17 +269,26 @@ struct rte_mbuf {
> /* remaining bytes are set on RX when pulling packet from descriptor */
> MARKER rx_descriptor_fields1;
>
> - /**
> - * The packet type, which is used to indicate ordinary packet and also
> - * tunneled packet format, i.e. each number is represented a type of
> - * packet.
> + /*
> + * The packet type, which is the combination of outer/inner L2, L3, L4
> + * and tunnel types.
> */
> - uint16_t packet_type;
> + union {
> + uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
> + struct {
> + uint32_t l2_type:4; /**< (Outer) L2 type. */
> + uint32_t l3_type:4; /**< (Outer) L3 type. */
> + uint32_t l4_type:4; /**< (Outer) L4 type. */
> + uint32_t tun_type:4; /**< Tunnel type. */
> + uint32_t inner_l2_type:4; /**< Inner L2 type. */
> + uint32_t inner_l3_type:4; /**< Inner L3 type. */
> + uint32_t inner_l4_type:4; /**< Inner L4 type. */
> + };
> + };
>
> - uint16_t data_len; /**< Amount of data in segment buffer. */
> uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
> + uint16_t data_len; /**< Amount of data in segment buffer. */
> uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
> - uint16_t reserved;
> union {
> uint32_t rss; /**< RSS hash result if RSS enabled */
> struct {
ABI Compatibility process?
Neil
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v6 00/18] unified packet type
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
` (17 preceding siblings ...)
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 18/18] mbuf: remove old packet type bit masks Helin Zhang
@ 2015-06-01 7:33 ` Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
` (18 more replies)
18 siblings, 19 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:33 UTC (permalink / raw)
To: dev
Currently only 6 bits stored in ol_flags are used to indicate the packet
types. This is not enough, as some NIC hardware can recognize quite a lot
of packet types, e.g. i40e hardware can recognize more than 150 packet
types. Hiding those packet types hides hardware offload capabilities which
could be quite useful for improving performance and for end users. So a
unified packet type is needed to support all possible PMDs. The 16-bit
packet_type field in the mbuf structure can be enlarged to 32 bits and used
for this purpose. In addition, all packet types stored in the ol_flags field
should be removed, and 6 bits of ol_flags can be saved as the benefit.
Initially, 32 bits of packet_type can be divided into several sub fields to
indicate different packet type information of a packet. The initial design
is to divide those bits into fields for L2 types, L3 types, L4 types, tunnel
types, inner L2 types, inner L3 types and inner L4 types. All PMDs should
translate the offloaded packet types into these 7 fields of information, for
user applications.
To avoid breaking ABI compatibility, currently all the code changes for
unified packet type are disabled at compile time by default. Users can
enable it manually by defining the macro of RTE_UNIFIED_PKT_TYPE. The code
changes will be valid by default in a future release, and the old version
will be deleted accordingly, after the ABI change process is done.
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
* Used redefined packet types and enlarged packet_type field for all PMDs
and corresponding applications.
* Removed changes in bond and its relevant application, as there is no need
at all according to the recent bond changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Put vector ixgbe changes right after mbuf changes.
* Disabled vector ixgbe PMD by default, as mbuf layout changed, and then
re-enabled it after vector ixgbe PMD updated.
* Put the definitions of unified packet type into a single patch.
* Minor bug fixes and enhancements in l3fwd example.
v4 changes:
* Added detailed description of each packet types.
* Supported unified packet type of fm10k.
* Added printing logs of packet types of each received packet for rxonly
mode in testpmd.
* Removed several useless code lines which block packet type unification from
app/test/packet_burst_generator.c.
v5 changes:
* Added more detailed description for each packet types, together with examples.
* Rolled back the macro definitions of RX packet flags, for ABI compatibility.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
Helin Zhang (18):
mbuf: redefine packet_type in rte_mbuf
ixgbe: support unified packet type in vectorized PMD
mbuf: add definitions of unified packet types
e1000: replace bit mask based packet type with unified packet type
ixgbe: replace bit mask based packet type with unified packet type
i40e: replace bit mask based packet type with unified packet type
enic: replace bit mask based packet type with unified packet type
vmxnet3: replace bit mask based packet type with unified packet type
fm10k: replace bit mask based packet type with unified packet type
app/test-pipeline: replace bit mask based packet type with unified
packet type
app/testpmd: replace bit mask based packet type with unified packet
type
app/test: Remove useless code
examples/ip_fragmentation: replace bit mask based packet type with
unified packet type
examples/ip_reassembly: replace bit mask based packet type with
unified packet type
examples/l3fwd-acl: replace bit mask based packet type with unified
packet type
examples/l3fwd-power: replace bit mask based packet type with unified
packet type
examples/l3fwd: replace bit mask based packet type with unified packet
type
mbuf: remove old packet type bit masks
app/test-pipeline/pipeline_hash.c | 13 +
app/test-pmd/csumonly.c | 14 +
app/test-pmd/rxonly.c | 183 +++++++
app/test/packet_burst_generator.c | 6 +-
drivers/net/e1000/igb_rxtx.c | 102 ++++
drivers/net/enic/enic_main.c | 26 +
drivers/net/fm10k/fm10k_rxtx.c | 27 ++
drivers/net/i40e/i40e_rxtx.c | 528 +++++++++++++++++++++
drivers/net/ixgbe/ixgbe_rxtx.c | 163 +++++++
drivers/net/ixgbe/ixgbe_rxtx_vec.c | 75 ++-
drivers/net/vmxnet3/vmxnet3_rxtx.c | 8 +
examples/ip_fragmentation/main.c | 9 +
examples/ip_reassembly/main.c | 9 +
examples/l3fwd-acl/main.c | 29 +-
examples/l3fwd-power/main.c | 8 +
examples/l3fwd/main.c | 123 ++++-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 +
lib/librte_mbuf/rte_mbuf.c | 4 +
lib/librte_mbuf/rte_mbuf.h | 514 ++++++++++++++++++++
19 files changed, 1834 insertions(+), 13 deletions(-)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
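Because the whole v6 series is gated on RTE_UNIFIED_PKT_TYPE, application code that has to build against both layouts can wrap the check in one place. The fragment below is only an illustration, not part of the patch set; it assumes the mbuf and packet-type definitions from this series are already included.

static inline int pkt_is_ipv4(const struct rte_mbuf *m)
{
#ifdef RTE_UNIFIED_PKT_TYPE
	return RTE_ETH_IS_IPV4_HDR(m->packet_type) != 0;
#else
	return (m->ol_flags & PKT_RX_IPV4_HDR) != 0;
#endif
}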
* [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
@ 2015-06-01 7:33 ` Helin Zhang
2015-06-01 8:14 ` Olivier MATZ
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 02/18] ixgbe: support unified packet type in vectorized PMD Helin Zhang
` (17 subsequent siblings)
18 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:33 UTC (permalink / raw)
To: dev
In order to unify the packet type, the field of 'packet_type' in
'struct rte_mbuf' needs to be extended from 16 to 32 bits.
Accordingly, some fields in 'struct rte_mbuf' are re-organized to
support this change for Vector PMD. As 'struct rte_kni_mbuf' for
KNI should be right mapped to 'struct rte_mbuf', it should be
modified accordingly. In addition, Vector PMD of ixgbe is disabled
by default, as 'struct rte_mbuf' changed.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
---
config/common_linuxapp | 2 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 ++++++
lib/librte_mbuf/rte_mbuf.h | 23 ++++++++++++++++++++++
3 files changed, 30 insertions(+), 1 deletion(-)
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Disabled vector ixgbe PMD by default, as mbuf layout changed.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 0078dc9..6b067c7 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -167,7 +167,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=y
+CONFIG_RTE_IXGBE_INC_VECTOR=n
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
#
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index 1e55c2d..7a2abbb 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -117,9 +117,15 @@ struct rte_kni_mbuf {
uint16_t data_off; /**< Start address of data in segment buffer. */
char pad1[4];
uint64_t ol_flags; /**< Offload features. */
+#ifdef RTE_UNIFIED_PKT_TYPE
+ char pad2[4];
+ uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
+#else
char pad2[2];
uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+#endif
/* fields on second cache line */
char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ab6de67..a8662c2 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -269,6 +269,28 @@ struct rte_mbuf {
/* remaining bytes are set on RX when pulling packet from descriptor */
MARKER rx_descriptor_fields1;
+#ifdef RTE_UNIFIED_PKT_TYPE
+ /*
+ * The packet type, which is the combination of outer/inner L2, L3, L4
+ * and tunnel types.
+ */
+ union {
+ uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
+ struct {
+ uint32_t l2_type:4; /**< (Outer) L2 type. */
+ uint32_t l3_type:4; /**< (Outer) L3 type. */
+ uint32_t l4_type:4; /**< (Outer) L4 type. */
+ uint32_t tun_type:4; /**< Tunnel type. */
+ uint32_t inner_l2_type:4; /**< Inner L2 type. */
+ uint32_t inner_l3_type:4; /**< Inner L3 type. */
+ uint32_t inner_l4_type:4; /**< Inner L4 type. */
+ };
+ };
+
+ uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
+ uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
+#else
/**
* The packet type, which is used to indicate ordinary packet and also
* tunneled packet format, i.e. each number is represented a type of
@@ -280,6 +302,7 @@ struct rte_mbuf {
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
uint16_t reserved;
+#endif
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
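The union added above lets the same 32 bits be read either as the flat packet_type or as seven 4-bit fields. A self-contained sketch of that duality follows; bitfield ordering is compiler- and endian-dependent, so this only mirrors the intended little-endian (x86) layout, and the union here is a local copy for illustration, not the rte_mbuf definition itself.

#include <stdint.h>
#include <stdio.h>

union pkt_type {
	uint32_t packet_type;		/* flat L2/L3/L4 + tunnel value */
	struct {
		uint32_t l2_type:4;
		uint32_t l3_type:4;
		uint32_t l4_type:4;
		uint32_t tun_type:4;
		uint32_t inner_l2_type:4;
		uint32_t inner_l3_type:4;
		uint32_t inner_l4_type:4;
	};
};

int main(void)
{
	union pkt_type pt = { .packet_type = 0x00000211 }; /* MAC | IPV4 | UDP */

	printf("l2=%u l3=%u l4=%u tunnel=%u\n",
	       (unsigned)pt.l2_type, (unsigned)pt.l3_type,
	       (unsigned)pt.l4_type, (unsigned)pt.tun_type);
	return 0;
}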
* [dpdk-dev] [PATCH v6 02/18] ixgbe: support unified packet type in vectorized PMD
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
@ 2015-06-01 7:33 ` Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 03/18] mbuf: add definitions of unified packet types Helin Zhang
` (16 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:33 UTC (permalink / raw)
To: dev
To unify the packet type, bit masks of packet type for ol_flags are
replaced. In addition, more packet types (UDP, TCP and SCTP) are
supported in vectorized ixgbe PMD.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Note that a performance drop of around 2% (64B packets) was observed when
doing 4-port (1 port per 82599 card) IO forwarding on the same SNB core.
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
config/common_linuxapp | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec.c | 75 +++++++++++++++++++++++++++++++++++++-
2 files changed, 74 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v3 changes:
* Put vector ixgbe changes right after mbuf changes.
* Enabled vector ixgbe PMD by default together with changes for updated
vector PMD.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 6b067c7..0078dc9 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -167,7 +167,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=n
+CONFIG_RTE_IXGBE_INC_VECTOR=y
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
#
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
index abd10f6..382c949 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
@@ -134,6 +134,12 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE
+#ifdef RTE_UNIFIED_PKT_TYPE
+#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
+ ((uint64_t)PKT_RX_VLAN_PKT))
+#else
#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
PKT_RX_IPV6_HDR_EXT))
@@ -142,11 +148,26 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
((uint64_t)OLFLAGS_MASK << 16) | \
((uint64_t)OLFLAGS_MASK))
#define PTYPE_SHIFT (1)
+#endif /* RTE_UNIFIED_PKT_TYPE */
+
#define VTAG_SHIFT (3)
static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
+#ifdef RTE_UNIFIED_PKT_TYPE
+ __m128i vtag0, vtag1;
+ union {
+ uint16_t e[4];
+ uint64_t dword;
+ } vol;
+
+ vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
+ vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);
+ vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
+ vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);
+ vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;
+#else
__m128i ptype0, ptype1, vtag0, vtag1;
union {
uint16_t e[4];
@@ -166,6 +187,7 @@ desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
ptype1 = _mm_or_si128(ptype1, vtag1);
vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
+#endif /* RTE_UNIFIED_PKT_TYPE */
rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
@@ -196,6 +218,18 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
int pos;
uint64_t var;
__m128i shuf_msk;
+#ifdef RTE_UNIFIED_PKT_TYPE
+ __m128i crc_adjust = _mm_set_epi16(
+ 0, 0, 0, /* ignore non-length fields */
+ -rxq->crc_len, /* sub crc on data_len */
+ 0, /* ignore high-16bits of pkt_len */
+ -rxq->crc_len, /* sub crc on pkt_len */
+ 0, 0 /* ignore pkt_type field */
+ );
+ __m128i dd_check, eop_check;
+ __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
+ 0xFFFFFFFF, 0xFFFF07F0);
+#else
__m128i crc_adjust = _mm_set_epi16(
0, 0, 0, 0, /* ignore non-length fields */
0, /* ignore high-16bits of pkt_len */
@@ -204,6 +238,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
0 /* ignore pkt_type field */
);
__m128i dd_check, eop_check;
+#endif /* RTE_UNIFIED_PKT_TYPE */
if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -232,6 +267,18 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL);
/* mask to shuffle from desc. to mbuf */
+#ifdef RTE_UNIFIED_PKT_TYPE
+ shuf_msk = _mm_set_epi8(
+ 7, 6, 5, 4, /* octet 4~7, 32bits rss */
+ 15, 14, /* octet 14~15, low 16 bits vlan_macip */
+ 13, 12, /* octet 12~13, 16 bits data_len */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
+ 13, 12, /* octet 12~13, low 16 bits pkt_len */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_type */
+ 1, /* octet 1, 8 bits pkt_type field */
+ 0 /* octet 0, 4 bits offset 4 pkt_type field */
+ );
+#else
shuf_msk = _mm_set_epi8(
7, 6, 5, 4, /* octet 4~7, 32bits rss */
0xFF, 0xFF, /* skip high 16 bits vlan_macip, zero out */
@@ -241,18 +288,28 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
13, 12, /* octet 12~13, 16 bits data_len */
0xFF, 0xFF /* skip pkt_type field */
);
+#endif /* RTE_UNIFIED_PKT_TYPE */
/* Cache is empty -> need to scan the buffer rings, but first move
* the next 'n' mbufs into the cache */
sw_ring = &rxq->sw_ring[rxq->rx_tail];
- /*
- * A. load 4 packet in one loop
+#ifdef RTE_UNIFIED_PKT_TYPE
+ /* A. load 4 packet in one loop
+ * [A*. mask out 4 unused dirty field in desc]
* B. copy 4 mbuf point from swring to rx_pkts
* C. calc the number of DD bits among the 4 packets
* [C*. extract the end-of-packet bit, if requested]
* D. fill info. from desc to mbuf
*/
+#else
+ /* A. load 4 packet in one loop
+ * B. copy 4 mbuf point from swring to rx_pkts
+ * C. calc the number of DD bits among the 4 packets
+ * [C*. extract the end-of-packet bit, if requested]
+ * D. fill info. from desc to mbuf
+ */
+#endif /* RTE_UNIFIED_PKT_TYPE */
for (pos = 0, nb_pkts_recd = 0; pos < RTE_IXGBE_VPMD_RX_BURST;
pos += RTE_IXGBE_DESCS_PER_LOOP,
rxdp += RTE_IXGBE_DESCS_PER_LOOP) {
@@ -289,6 +346,16 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.2 copy 2 mbuf point into rx_pkts */
_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);
+#ifdef RTE_UNIFIED_PKT_TYPE
+ /* A* mask out 0~3 bits RSS type */
+ descs[3] = _mm_and_si128(descs[3], desc_mask);
+ descs[2] = _mm_and_si128(descs[2], desc_mask);
+
+ /* A* mask out 0~3 bits RSS type */
+ descs[1] = _mm_and_si128(descs[1], desc_mask);
+ descs[0] = _mm_and_si128(descs[0], desc_mask);
+#endif /* RTE_UNIFIED_PKT_TYPE */
+
/* avoid compiler reorder optimization */
rte_compiler_barrier();
@@ -301,7 +368,11 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
+#ifdef RTE_UNIFIED_PKT_TYPE
+ /* set ol_flags with vlan packet type */
+#else
/* set ol_flags with packet type and vlan tag */
+#endif /* RTE_UNIFIED_PKT_TYPE */
desc_to_olflags_v(descs, &rx_pkts[pos]);
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
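The key extra step in the vectorized path above is masking the descriptor words before the shuffle, so the RSS-type bits do not leak into packet_type. Below is a minimal SSE2 sketch of just that masking step; the descriptor value is invented, and only the 0xFFFF07F0 constant follows the desc_mask used in the patch.

#include <emmintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* pretend low 32-bit lane of a descriptor: packet-type bits plus RSS-type noise */
	__m128i desc = _mm_set_epi32(0, 0, 0, (int)0x0000a7fd);
	__m128i mask = _mm_set_epi32((int)0xFFFFFFFF, (int)0xFFFFFFFF,
				     (int)0xFFFFFFFF, (int)0xFFFF07F0);
	uint32_t out[4];

	desc = _mm_and_si128(desc, mask);
	_mm_storeu_si128((__m128i *)out, desc);
	printf("masked low lane = 0x%08x\n", (unsigned)out[0]); /* prints 0x000007f0 */
	return 0;
}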
* [dpdk-dev] [PATCH v6 03/18] mbuf: add definitions of unified packet types
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 02/18] ixgbe: support unified packet type in vectorized PMD Helin Zhang
@ 2015-06-01 7:33 ` Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 04/18] e1000: replace bit mask based packet type with unified packet type Helin Zhang
` (15 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:33 UTC (permalink / raw)
To: dev
There are only 6 bit flags in ol_flags for indicating packet
types, which is not enough to describe all the possible packet
types hardware can recognize. For example, i40e hardware can
recognize more than 150 packet types. Unified packet type is
composed of L2 type, L3 type, L4 type, tunnel type, inner L2 type,
inner L3 type and inner L4 type fields, and can be stored in
'struct rte_mbuf' of 32 bits field 'packet_type'.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 487 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 487 insertions(+)
v3 changes:
* Put the definitions of unified packet type into a single patch.
v4 changes:
* Added detailed description of each packet types.
v5 changes:
* Re-worded the commit logs.
* Added more detailed description for all packet types, together with examples.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index a8662c2..94e51cd 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -195,6 +195,493 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
+#ifdef RTE_UNIFIED_PKT_TYPE
+/*
+ * 32 bits are divided into several fields to mark packet types. Note that
+ * each field is indexical.
+ * - Bit 3:0 is for L2 types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
+ * - Bit 15:12 is for tunnel types.
+ * - Bit 19:16 is for inner L2 types.
+ * - Bit 23:20 is for inner L3 types.
+ * - Bit 27:24 is for inner L4 types.
+ * - Bit 31:28 is reserved.
+ *
+ * To be compatible with Vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that L3 types values are selected for checking IPV4/IPV6 header from
+ * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
+ *
+ * Note that the packet types of the same packet recognized by different
+ * hardware may be different, as different hardware may have different
+ * capability of packet type recognition.
+ *
+ * examples:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=0x29
+ * | 'version'=6, 'next header'=0x3A
+ * | 'ICMPv6 header'>
+ * will be recognized on i40e hardware as packet type combination of,
+ * RTE_PTYPE_L2_MAC |
+ * RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ * RTE_PTYPE_TUNNEL_IP |
+ * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_INNER_L4_ICMP.
+ *
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x2F
+ * | 'GRE header'
+ * | 'version'=6, 'next header'=0x11
+ * | 'UDP header'>
+ * will be recognized on i40e hardware as packet type combination of,
+ * RTE_PTYPE_L2_MAC |
+ * RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_TUNNEL_GRENAT |
+ * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_INNER_L4_UDP.
+ */
+#define RTE_PTYPE_UNKNOWN 0x00000000
+/**
+ * MAC (Media Access Control) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=[0x0800|0x86DD|others]>
+ */
+#define RTE_PTYPE_L2_MAC 0x00000001
+/**
+ * MAC (Media Access Control) packet type for time sync.
+ *
+ * Packet format:
+ * <'ether type'=0x88F7>
+ */
+#define RTE_PTYPE_L2_MAC_TIMESYNC 0x00000002
+/**
+ * ARP (Address Resolution Protocol) packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0806>
+ */
+#define RTE_PTYPE_L2_ARP 0x00000003
+/**
+ * LLDP (Link Layer Discovery Protocol) packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x88CC>
+ */
+#define RTE_PTYPE_L2_LLDP 0x00000004
+/**
+ * Mask of layer 2 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L2_MASK 0x0000000f
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * header option.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=5>
+ */
+#define RTE_PTYPE_L3_IPV4 0x00000010
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and contains header
+ * options.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[6-15], 'options'>
+ */
+#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * extension header.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x3B>
+ */
+#define RTE_PTYPE_L3_IPV6 0x00000040
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * header options.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[5-15], <'options'>>
+ */
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and contains extension
+ * headers.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * 'extension headers'>
+ */
+#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * extension headers.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * <'extension headers'>>
+ */
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
+/**
+ * Mask of layer 3 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L3_MASK 0x000000f0
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=6, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=6>
+ */
+#define RTE_PTYPE_L4_TCP 0x00000100
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17>
+ */
+#define RTE_PTYPE_L4_UDP 0x00000200
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * It refers to packets of any IP type that are recognized as fragmented. A
+ * fragmented packet cannot be reported as any other L4 type
+ * (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP,
+ * RTE_PTYPE_L4_NONFRAG).
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'MF'=1>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=44>
+ */
+#define RTE_PTYPE_L4_FRAG 0x00000300
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=132, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=132>
+ */
+#define RTE_PTYPE_L4_SCTP 0x00000400
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=1, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=1>
+ */
+#define RTE_PTYPE_L4_ICMP 0x00000500
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * It refers to packets of any IP type that cannot be recognized as any of the
+ * above L4 types (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP,
+ * RTE_PTYPE_L4_FRAG, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP).
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'!=[6|17|44|132|1]>
+ */
+#define RTE_PTYPE_L4_NONFRAG 0x00000600
+/**
+ * Mask of layer 4 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_MASK 0x00000f00
+/**
+ * IP (Internet Protocol) in IP (Internet Protocol) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=[4|41]>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[4|41]>
+ */
+#define RTE_PTYPE_TUNNEL_IP 0x00001000
+/**
+ * GRE (Generic Routing Encapsulation) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=47>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=47>
+ */
+#define RTE_PTYPE_TUNNEL_GRE 0x00002000
+/**
+ * VXLAN (Virtual eXtensible Local Area Network) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17
+ * | 'destination port'=4789>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17
+ * | 'destination port'=4789>
+ */
+#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
+/**
+ * NVGRE (Network Virtualization using Generic Routing Encapsulation) tunneling
+ * packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=47
+ * | 'protocol type'=0x6558>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=47
+ * | 'protocol type'=0x6558'>
+ */
+#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
+/**
+ * GENEVE (Generic Network Virtualization Encapsulation) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17
+ * | 'destination port'=6081>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17
+ * | 'destination port'=6081>
+ */
+#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
+/**
+ * Tunneling packet type: Teredo, VXLAN (Virtual eXtensible Local Area
+ * Network) or GRE (Generic Routing Encapsulation) packets are reported as
+ * this packet type when the hardware cannot distinguish them individually.
+ */
+#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
+/**
+ * Mask of tunneling packet types.
+ */
+#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
+/**
+ * MAC (Media Access Control) packet type.
+ * It is used for inner packet type only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=[0x0800|0x86DD]>
+ */
+#define RTE_PTYPE_INNER_L2_MAC 0x00010000
+/**
+ * MAC (Media Access Control) packet type with VLAN (Virtual Local Area
+ * Network) tag.
+ *
+ * Packet format (inner only):
+ * <'ether type'=[0x0800|0x86DD], vlan=[1-4095]>
+ */
+#define RTE_PTYPE_INNER_L2_MAC_VLAN 0x00020000
+/**
+ * Mask of inner layer 2 packet types.
+ */
+#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and does not contain any header option.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=5>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and contains header options.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[6-15], 'options'>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and does not contain any extension header.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x3B>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and may or may not contain header options.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[5-15], <'options'>>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and contains extension headers.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * 'extension headers'>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and may or may not contain extension
+ * headers.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * <'extension headers'>>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
+/**
+ * Mask of inner layer 3 packet types.
+ */
+#define RTE_PTYPE_INNER_INNER_L3_MASK 0x00f00000
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=6, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=6>
+ */
+#define RTE_PTYPE_INNER_L4_TCP 0x01000000
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17>
+ */
+#define RTE_PTYPE_INNER_L4_UDP 0x02000000
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not carry a layer 4 header.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'MF'=1>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=44>
+ */
+#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=132, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=132>
+ */
+#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=1, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=1>
+ */
+#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not carry a layer 4 header
+ * of an unrecognized type.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'!=[6|17|44|132|1]>
+ */
+#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
+/**
+ * Mask of inner layer 4 packet types.
+ */
+#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing the IPv4 types one
+ * by one, bit 4 is reserved for IPv4 only, so checking that single bit is
+ * enough to determine whether the packet is IPv4.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing the IPv6 types one
+ * by one, bit 6 is reserved for IPv6 only, so checking that single bit is
+ * enough to determine whether the packet is IPv6.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneling packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+#endif /* RTE_UNIFIED_PKT_TYPE */
+
/**
* Get the name of a RX offload flag
*
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
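As the annotations above describe, an application only needs single bit tests
to classify outer headers once these definitions are in place. A minimal
sketch of such usage, assuming the patch is applied; the function and counter
names are illustrative, not part of the patch:

#include <rte_mbuf.h>

/* Minimal sketch: count outer IPv4/UDP, outer IPv6 and tunneled packets
 * using the unified packet type. All names here are illustrative. */
static inline void
count_unified_ptype(const struct rte_mbuf *m, uint64_t *ipv4_udp,
		    uint64_t *ipv6, uint64_t *tunnel)
{
	uint32_t ptype = m->packet_type;

	/* Bit 4 is set for every outer IPv4 type (0x10, 0x30, 0x90). */
	if (RTE_ETH_IS_IPV4_HDR(ptype) &&
	    (ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)
		(*ipv4_udp)++;
	/* Bit 6 is set for every outer IPv6 type (0x40, 0xc0, 0xe0). */
	else if (RTE_ETH_IS_IPV6_HDR(ptype))
		(*ipv6)++;

	/* A non-zero tunnel field marks a tunneled packet; the inner
	 * headers are described by the RTE_PTYPE_INNER_* values. */
	if (RTE_ETH_IS_TUNNEL_PKT(ptype))
		(*tunnel)++;
}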
* [dpdk-dev] [PATCH v6 04/18] e1000: replace bit mask based packet type with unified packet type
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (2 preceding siblings ...)
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 03/18] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-06-01 7:33 ` Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 05/18] ixgbe: " Helin Zhang
` (14 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:33 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/e1000/igb_rxtx.c | 102 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 102 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
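The conversion in the diff below follows the same pattern used for the other
Intel NICs in this series: the packet-type bits extracted from the RX
descriptor index a static, cache-aligned table of unified RTE_PTYPE_*
combinations, and any value not present in the table stays
RTE_PTYPE_UNKNOWN. A condensed sketch of that idea; the shift and mask values
mirror the ones in igb_rxd_pkt_info_to_pkt_type() below, but the names and
table are illustrative only:

#include <stdint.h>

/* Condensed illustration of the descriptor-to-ptype lookup pattern; the
 * real table entries live in igb_rxd_pkt_info_to_pkt_type(). */
#define EX_PTYPE_SHIFT 4
#define EX_PTYPE_MASK  0x7f

static inline uint32_t
ex_pkt_info_to_ptype(uint16_t pkt_info,
		     const uint32_t table[EX_PTYPE_MASK + 1])
{
	/* Table entries never filled in remain 0, i.e. RTE_PTYPE_UNKNOWN. */
	return table[(pkt_info >> EX_PTYPE_SHIFT) & EX_PTYPE_MASK];
}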
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index f586311..112b876 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -590,6 +590,99 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
+#ifdef RTE_UNIFIED_PKT_TYPE
+#define IGB_PACKET_TYPE_IPV4 0X01
+#define IGB_PACKET_TYPE_IPV4_TCP 0X11
+#define IGB_PACKET_TYPE_IPV4_UDP 0X21
+#define IGB_PACKET_TYPE_IPV4_SCTP 0X41
+#define IGB_PACKET_TYPE_IPV4_EXT 0X03
+#define IGB_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IGB_PACKET_TYPE_IPV6 0X04
+#define IGB_PACKET_TYPE_IPV6_TCP 0X14
+#define IGB_PACKET_TYPE_IPV6_UDP 0X24
+#define IGB_PACKET_TYPE_IPV6_EXT 0X0C
+#define IGB_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IGB_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IGB_PACKET_TYPE_IPV4_IPV6 0X05
+#define IGB_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IGB_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IGB_PACKET_TYPE_MAX 0X80
+#define IGB_PACKET_TYPE_MASK 0X7F
+#define IGB_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IGB_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IGB_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [IGB_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IGB_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IGB_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & E1000_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IGB_PACKET_TYPE_SHIFT) & IGB_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
+{
+ uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH;
+
+#if defined(RTE_LIBRTE_IEEE1588)
+ static uint32_t ip_pkt_etqf_map[8] = {
+ 0, 0, 0, PKT_RX_IEEE1588_PTP,
+ 0, 0, 0, 0,
+ };
+
+ pkt_flags |= ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07];
+#endif
+
+ return pkt_flags;
+}
+#else /* RTE_UNIFIED_PKT_TYPE */
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
@@ -617,6 +710,7 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
#endif
return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
}
+#endif /* RTE_UNIFIED_PKT_TYPE */
static inline uint64_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
@@ -790,6 +884,10 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+#ifdef RTE_UNIFIED_PKT_TYPE
+ rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
+ lo_dword.hs_rss.pkt_info);
+#endif
/*
* Store the mbuf address into the next entry of the array
@@ -1024,6 +1122,10 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
+#ifdef RTE_UNIFIED_PKT_TYPE
+ first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
+ lower.lo_dword.hs_rss.pkt_info);
+#endif
/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v6 05/18] ixgbe: replace bit mask based packet type with unified packet type
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (3 preceding siblings ...)
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 04/18] e1000: replace bit mask based packet type with unified packet type Helin Zhang
@ 2015-06-01 7:33 ` Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 06/18] i40e: " Helin Zhang
` (13 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:33 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Note that a performance drop of around 2.5% (64B packets) was observed when
doing 4-port (1 port per 82599 card) IO forwarding on the same SNB core.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 163 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 163 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 4f9ab22..c4d9b02 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -855,6 +855,110 @@ end_of_tx:
* RX functions
*
**********************************************************************/
+#ifdef RTE_UNIFIED_PKT_TYPE
+#define IXGBE_PACKET_TYPE_IPV4 0X01
+#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
+#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
+#define IXGBE_PACKET_TYPE_IPV4_SCTP 0X41
+#define IXGBE_PACKET_TYPE_IPV4_EXT 0X03
+#define IXGBE_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IXGBE_PACKET_TYPE_IPV6 0X04
+#define IXGBE_PACKET_TYPE_IPV6_TCP 0X14
+#define IXGBE_PACKET_TYPE_IPV6_UDP 0X24
+#define IXGBE_PACKET_TYPE_IPV6_EXT 0X0C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IXGBE_PACKET_TYPE_IPV4_IPV6 0X05
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IXGBE_PACKET_TYPE_MAX 0X80
+#define IXGBE_PACKET_TYPE_MASK 0X7F
+#define IXGBE_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+ixgbe_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IXGBE_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IXGBE_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4,
+ [IXGBE_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IXGBE_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IXGBE_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IXGBE_PACKET_TYPE_SHIFT) &
+ IXGBE_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
+{
+ static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
+ 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+ 0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
+ PKT_RX_RSS_HASH, 0, 0, 0,
+ 0, 0, 0, PKT_RX_FDIR,
+ };
+#ifdef RTE_LIBRTE_IEEE1588
+ static uint64_t ip_pkt_etqf_map[8] = {
+ 0, 0, 0, PKT_RX_IEEE1588_PTP,
+ 0, 0, 0, 0,
+ };
+
+ if (likely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return ip_pkt_etqf_map[(pkt_info >> 4) & 0X07] |
+ ip_rss_types_map[pkt_info & 0XF];
+ else
+ return ip_rss_types_map[pkt_info & 0XF];
+#else
+ return ip_rss_types_map[pkt_info & 0XF];
+#endif
+}
+#else /* RTE_UNIFIED_PKT_TYPE */
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
@@ -890,6 +994,7 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
#endif
return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
}
+#endif /* RTE_UNIFIED_PKT_TYPE */
static inline uint64_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
@@ -945,7 +1050,13 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
+#ifdef RTE_UNIFIED_PKT_TYPE
+ int nb_dd;
+ uint32_t s[LOOK_AHEAD];
+ uint16_t pkt_info[LOOK_AHEAD];
+#else
int s[LOOK_AHEAD], nb_dd;
+#endif /* RTE_UNIFIED_PKT_TYPE */
int i, j, nb_rx = 0;
@@ -968,6 +1079,12 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rxdp[j].wb.upper.status_error;
+#ifdef RTE_UNIFIED_PKT_TYPE
+ for (j = LOOK_AHEAD-1; j >= 0; --j)
+ pkt_info[j] = rxdp[j].wb.lower.lo_dword.
+ hs_rss.pkt_info;
+#endif /* RTE_UNIFIED_PKT_TYPE */
+
/* Compute how many status bits were set */
nb_dd = 0;
for (j = 0; j < LOOK_AHEAD; ++j)
@@ -985,12 +1102,22 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
+#ifdef RTE_UNIFIED_PKT_TYPE
+ pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
+ pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags |=
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
+ mb->ol_flags = pkt_flags;
+ mb->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);
+#else /* RTE_UNIFIED_PKT_TYPE */
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
rxdp[j].wb.lower.lo_dword.data);
/* reuse status field from scan list */
pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
mb->ol_flags = pkt_flags;
+#endif /* RTE_UNIFIED_PKT_TYPE */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
@@ -1207,7 +1334,11 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
+#ifdef RTE_UNIFIED_PKT_TYPE
+ uint32_t pkt_info;
+#else
uint32_t hlen_type_rss;
+#endif
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1325,6 +1456,19 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;
+#ifdef RTE_UNIFIED_PKT_TYPE
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
+ /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
+ rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
+
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+ rxm->ol_flags = pkt_flags;
+ rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
+#else /* RTE_UNIFIED_PKT_TYPE */
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
@@ -1333,6 +1477,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+#endif /* RTE_UNIFIED_PKT_TYPE */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
@@ -1406,6 +1551,23 @@ ixgbe_fill_cluster_head_buf(
uint8_t port_id,
uint32_t staterr)
{
+#ifdef RTE_UNIFIED_PKT_TYPE
+ uint16_t pkt_info;
+ uint64_t pkt_flags;
+
+ head->port = port_id;
+
+ /* The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
+ * set in the pkt_flags field.
+ */
+ head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
+ pkt_info = rte_le_to_cpu_32(desc->wb.lower.lo_dword.hs_rss.pkt_info);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags |= ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+ head->ol_flags = pkt_flags;
+ head->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
+#else /* RTE_UNIFIED_PKT_TYPE */
uint32_t hlen_type_rss;
uint64_t pkt_flags;
@@ -1421,6 +1583,7 @@ ixgbe_fill_cluster_head_buf(
pkt_flags |= rx_desc_status_to_pkt_flags(staterr);
pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
head->ol_flags = pkt_flags;
+#endif /* RTE_UNIFIED_PKT_TYPE */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
head->hash.rss = rte_le_to_cpu_32(desc->wb.lower.hi_dword.rss);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v6 06/18] i40e: replace bit mask based packet type with unified packet type
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (4 preceding siblings ...)
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 05/18] ixgbe: " Helin Zhang
@ 2015-06-01 7:33 ` Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 07/18] enic: " Helin Zhang
` (12 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:33 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/i40e/i40e_rxtx.c | 528 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 528 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
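The i40e table in the diff below is the first in this series to report full
tunnel combinations, packing the outer L3 type, tunnel type and inner L2/L3/L4
types into one 32-bit value. A short sketch, using only the masks defined
earlier in the mbuf patch, of how an application might pull those fields apart
again; the function name is illustrative, not part of the patch:

#include <stdint.h>
#include <rte_mbuf.h>

/* Illustrative decomposition of a unified packet_type value into the
 * fields an i40e-style mapping table can set. */
static inline void
ex_split_ptype(uint32_t ptype, uint32_t *outer_l3, uint32_t *tunnel,
	       uint32_t *inner_l4)
{
	*outer_l3 = ptype & RTE_PTYPE_L3_MASK;     /* e.g. RTE_PTYPE_L3_IPV4_EXT_UNKNOWN */
	*tunnel   = ptype & RTE_PTYPE_TUNNEL_MASK; /* e.g. RTE_PTYPE_TUNNEL_GRENAT */
	*inner_l4 = ptype & RTE_PTYPE_INNER_L4_MASK; /* e.g. RTE_PTYPE_INNER_L4_UDP */
}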
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 787f0bd..e20c98d 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -151,6 +151,514 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
return flags;
}
+#ifdef RTE_UNIFIED_PKT_TYPE
+/* The hardware datasheet describes the meaning of each value in detail */
+static inline uint32_t
+i40e_rxd_pkt_type_mapping(uint8_t ptype)
+{
+ static const uint32_t ptype_table[UINT8_MAX] __rte_cache_aligned = {
+ /* L2 types */
+ /* [0] reserved */
+ [1] = RTE_PTYPE_L2_MAC,
+ [2] = RTE_PTYPE_L2_MAC_TIMESYNC,
+ /* [3] - [5] reserved */
+ [6] = RTE_PTYPE_L2_LLDP,
+ /* [7] - [10] reserved */
+ [11] = RTE_PTYPE_L2_ARP,
+ /* [12] - [21] reserved */
+
+ /* Non tunneled IPv4 */
+ [22] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [24] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [25] reserved */
+ [26] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv4 */
+ [29] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [30] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [31] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [32] reserved */
+ [33] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [34] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [35] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> IPv6 */
+ [36] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [37] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [38] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [39] reserved */
+ [40] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [41] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [42] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN */
+ [43] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+ [44] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [45] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [46] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [47] reserved */
+ [48] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [49] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [50] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+ [51] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [52] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [53] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [54] reserved */
+ [55] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [56] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [57] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+ [58] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [59] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [60] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [61] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [62] reserved */
+ [63] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [64] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [65] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [66] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [67] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [68] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [69] reserved */
+ [70] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [71] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [72] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [73] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [74] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [75] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [76] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [77] reserved */
+ [78] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [79] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [80] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [81] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [82] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [83] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [84] reserved */
+ [85] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [86] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [87] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* Non tunneled IPv6 */
+ [88] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [90] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [91] reserved */
+ [92] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv4 */
+ [95] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [96] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [97] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [98] reserved */
+ [99] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [100] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [101] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> IPv6 */
+ [102] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [103] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [104] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [105] reserved */
+ [106] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [107] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [108] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN */
+ [109] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+ [110] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [111] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [112] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [113] reserved */
+ [114] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [115] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [116] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+ [117] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [118] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [119] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [120] reserved */
+ [121] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [122] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [123] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+ [124] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [125] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [126] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [127] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [128] reserved */
+ [129] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [130] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [131] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [132] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [133] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [134] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [135] reserved */
+ [136] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [137] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [138] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [139] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [140] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [141] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [142] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [143] reserved */
+ [144] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [145] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [146] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [147] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [148] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [149] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [150] reserved */
+ [151] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [152] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [153] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* All others reserved */
+ };
+
+ return ptype_table[ptype];
+}
+#else /* RTE_UNIFIED_PKT_TYPE */
/* Translate pkt types to pkt flags */
static inline uint64_t
i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
@@ -418,6 +926,7 @@ i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
return ip_ptype_map[ptype];
}
+#endif /* RTE_UNIFIED_PKT_TYPE */
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_FD_ID 0x01
@@ -709,11 +1218,18 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+#ifdef RTE_UNIFIED_PKT_TYPE
+ mb->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT));
+#else
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
mb->packet_type = (uint16_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT);
+#endif /* RTE_UNIFIED_PKT_TYPE */
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -952,9 +1468,15 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+#ifdef RTE_UNIFIED_PKT_TYPE
+ rxm->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
+#else
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT);
+#endif /* RTE_UNIFIED_PKT_TYPE */
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1111,10 +1633,16 @@ i40e_recv_scattered_pkts(void *rx_queue,
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+#ifdef RTE_UNIFIED_PKT_TYPE
+ first_seg->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
+#else
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
first_seg->packet_type = (uint16_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT);
+#endif /* RTE_UNIFIED_PKT_TYPE */
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v6 07/18] enic: replace bit mask based packet type with unified packet type
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (5 preceding siblings ...)
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 06/18] i40e: " Helin Zhang
@ 2015-06-01 7:33 ` Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 08/18] vmxnet3: " Helin Zhang
` (11 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:33 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/enic/enic_main.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 15313c2..50cd8c9 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -423,7 +423,11 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
+#ifdef RTE_UNIFIED_PKT_TYPE
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -432,7 +436,11 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
+#ifdef RTE_UNIFIED_PKT_TYPE
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
} else {
/* Header split */
if (sop && !eop) {
@@ -445,7 +453,11 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
+#ifdef RTE_UNIFIED_PKT_TYPE
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -457,13 +469,22 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
+#ifdef RTE_UNIFIED_PKT_TYPE
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
+#ifdef RTE_UNIFIED_PKT_TYPE
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV4;
+#else
hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -475,7 +496,12 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
+#ifdef RTE_UNIFIED_PKT_TYPE
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV6;
+#else
hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
}
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v6 08/18] vmxnet3: replace bit mask based packet type with unified packet type
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (6 preceding siblings ...)
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 07/18] enic: " Helin Zhang
@ 2015-06-01 7:33 ` Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 09/18] fm10k: " Helin Zhang
` (10 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:33 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/vmxnet3/vmxnet3_rxtx.c | 8 ++++++++
1 file changed, 8 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index a1eac45..89b600b 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -649,9 +649,17 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
+#ifdef RTE_UNIFIED_PKT_TYPE
+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
+#else
rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+#endif
else
+#ifdef RTE_UNIFIED_PKT_TYPE
+ rxm->packet_type = RTE_PTYPE_L3_IPV4;
+#else
rxm->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!rcd->cnc) {
if (!rcd->ipc)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v6 09/18] fm10k: replace bit mask based packet type with unified packet type
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (7 preceding siblings ...)
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 08/18] vmxnet3: " Helin Zhang
@ 2015-06-01 7:33 ` Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 10/18] app/test-pipeline: " Helin Zhang
` (9 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:33 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/fm10k/fm10k_rxtx.c | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)
v4 changes:
* Supported unified packet type of fm10k from v4.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 56df6cd..71a7f5d 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -68,12 +68,37 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
static inline void
rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
{
+#ifdef RTE_UNIFIED_PKT_TYPE
+ static const uint32_t
+ ptype_table[FM10K_RXD_PKTTYPE_MASK >> FM10K_RXD_PKTTYPE_SHIFT]
+ __rte_cache_aligned = {
+ [FM10K_PKTTYPE_OTHER] = RTE_PTYPE_L2_MAC,
+ [FM10K_PKTTYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [FM10K_PKTTYPE_IPV4_EX] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [FM10K_PKTTYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [FM10K_PKTTYPE_IPV6_EX] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ };
+
+ m->packet_type = ptype_table[(d->w.pkt_info & FM10K_RXD_PKTTYPE_MASK)
+ >> FM10K_RXD_PKTTYPE_SHIFT];
+#else /* RTE_UNIFIED_PKT_TYPE */
uint16_t ptype;
static const uint16_t pt_lut[] = { 0,
PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT,
PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT,
0, 0, 0
};
+#endif /* RTE_UNIFIED_PKT_TYPE */
if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK)
m->ol_flags |= PKT_RX_RSS_HASH;
@@ -97,9 +122,11 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
if (unlikely(d->d.staterr & FM10K_RXD_STATUS_RXE))
m->ol_flags |= PKT_RX_RECIP_ERR;
+#ifndef RTE_UNIFIED_PKT_TYPE
ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >>
FM10K_RXD_PKTTYPE_SHIFT;
m->ol_flags |= pt_lut[(uint8_t)ptype];
+#endif
}
uint16_t
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v6 10/18] app/test-pipeline: replace bit mask based packet type with unified packet type
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (8 preceding siblings ...)
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 09/18] fm10k: " Helin Zhang
@ 2015-06-01 7:33 ` Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 11/18] app/testpmd: " Helin Zhang
` (8 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:33 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pipeline/pipeline_hash.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
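[Editor's note] The rework below also tightens the classification: with the old flags, every packet without PKT_RX_IPV4_HDR fell through to the IPv6 branch, whereas the unified type lets non-IP traffic be skipped explicitly. A condensed sketch of the new branch structure (classify_for_hash is a hypothetical helper; the RTE_ETH_IS_* checks are the ones used in the diff):

/* Editor's sketch: classify one mbuf the way the reworked loop below does;
 * returns 0 when the packet should not be hashed at all. */
static inline int
classify_for_hash(const struct rte_mbuf *m)
{
	if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
		return 4;	/* key is built from the IPv4 destination */
	if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
		return 6;	/* key is built from the IPv6 destination */
	return 0;		/* non-IP: previously mis-filed as IPv6, now skipped */
}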
diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 4598ad4..bc84920 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,20 +459,33 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);
+#ifdef RTE_UNIFIED_PKT_TYPE
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;
k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
+#ifdef RTE_UNIFIED_PKT_TYPE
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+#else
} else {
+#endif
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;
memcpy(key, ipv6_dst, 16);
+#ifdef RTE_UNIFIED_PKT_TYPE
+ } else
+ continue;
+#else
}
+#endif
*signature = test_hash(key, 0, 0);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v6 11/18] app/testpmd: replace bit mask based packet type with unified packet type
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (9 preceding siblings ...)
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 10/18] app/test-pipeline: " Helin Zhang
@ 2015-06-01 7:33 ` Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 12/18] app/test: Remove useless code Helin Zhang
` (7 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:33 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
app/test-pmd/csumonly.c | 14 ++++
app/test-pmd/rxonly.c | 183 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 197 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v4 changes:
* Added printing logs of packet types of each received packet in rxonly mode.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
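[Editor's note] The rxonly changes below decode every layer of the unified type by masking packet_type with the corresponding RTE_PTYPE_*_MASK and switching on the result. A condensed sketch of that decoding step (split_ptype is a hypothetical helper; the masks come from this series):

/* Editor's sketch: extract the (outer) L3 and tunnel parts of a unified
 * packet type, which the switch statements below then turn into strings. */
static inline void
split_ptype(uint32_t packet_type, uint32_t *l3, uint32_t *tun)
{
	*l3  = packet_type & RTE_PTYPE_L3_MASK;     /* e.g. RTE_PTYPE_L3_IPV4 */
	*tun = packet_type & RTE_PTYPE_TUNNEL_MASK; /* e.g. RTE_PTYPE_TUNNEL_VXLAN */
}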
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index c180ff2..43ab6f8 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -202,8 +202,14 @@ parse_ethernet(struct ether_hdr *eth_hdr, struct testpmd_offload_info *info)
/* Parse a vxlan header */
static void
+#ifdef RTE_UNIFIED_PKT_TYPE
+parse_vxlan(struct udp_hdr *udp_hdr,
+ struct testpmd_offload_info *info,
+ uint32_t pkt_type)
+#else
parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
uint64_t mbuf_olflags)
+#endif
{
struct ether_hdr *eth_hdr;
@@ -211,8 +217,12 @@ parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
* (rfc7348) or that the rx offload flag is set (i40e only
* currently) */
if (udp_hdr->dst_port != _htons(4789) &&
+#ifdef RTE_UNIFIED_PKT_TYPE
+ RTE_ETH_IS_TUNNEL_PKT(pkt_type) == 0)
+#else
(mbuf_olflags & (PKT_RX_TUNNEL_IPV4_HDR |
PKT_RX_TUNNEL_IPV6_HDR)) == 0)
+#endif
return;
info->is_tunnel = 1;
@@ -549,7 +559,11 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
struct udp_hdr *udp_hdr;
udp_hdr = (struct udp_hdr *)((char *)l3_hdr +
info.l3_len);
+#ifdef RTE_UNIFIED_PKT_TYPE
+ parse_vxlan(udp_hdr, &info, m->packet_type);
+#else
parse_vxlan(udp_hdr, &info, m->ol_flags);
+#endif
} else if (info.l4_proto == IPPROTO_GRE) {
struct simple_gre_hdr *gre_hdr;
gre_hdr = (struct simple_gre_hdr *)
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index ac56090..e6767be 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -91,7 +91,11 @@ pkt_burst_receive(struct fwd_stream *fs)
uint64_t ol_flags;
uint16_t nb_rx;
uint16_t i, packet_type;
+#ifdef RTE_UNIFIED_PKT_TYPE
+ uint16_t is_encapsulation;
+#else
uint64_t is_encapsulation;
+#endif
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
@@ -135,8 +139,12 @@ pkt_burst_receive(struct fwd_stream *fs)
ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
+#ifdef RTE_UNIFIED_PKT_TYPE
+ is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
+#else
is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
PKT_RX_TUNNEL_IPV6_HDR);
+#endif
print_ether_addr(" src=", ð_hdr->s_addr);
print_ether_addr(" - dst=", ð_hdr->d_addr);
@@ -160,6 +168,177 @@ pkt_burst_receive(struct fwd_stream *fs)
}
if (ol_flags & PKT_RX_VLAN_PKT)
printf(" - VLAN tci=0x%x", mb->vlan_tci);
+#ifdef RTE_UNIFIED_PKT_TYPE
+ if (mb->packet_type) {
+ uint32_t ptype;
+
+ /* (outer) L2 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L2_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L2_MAC:
+ printf(" - (outer) L2 type: MAC");
+ break;
+ case RTE_PTYPE_L2_MAC_TIMESYNC:
+ printf(" - (outer) L2 type: MAC Timesync");
+ break;
+ case RTE_PTYPE_L2_ARP:
+ printf(" - (outer) L2 type: ARP");
+ break;
+ case RTE_PTYPE_L2_LLDP:
+ printf(" - (outer) L2 type: LLDP");
+ break;
+ default:
+ printf(" - (outer) L2 type: Unknown");
+ break;
+ }
+
+ /* (outer) L3 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L3_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L3_IPV4:
+ printf(" - (outer) L3 type: IPV4");
+ break;
+ case RTE_PTYPE_L3_IPV4_EXT:
+ printf(" - (outer) L3 type: IPV4_EXT");
+ break;
+ case RTE_PTYPE_L3_IPV6:
+ printf(" - (outer) L3 type: IPV6");
+ break;
+ case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
+ printf(" - (outer) L3 type: IPV4_EXT_UNKNOWN");
+ break;
+ case RTE_PTYPE_L3_IPV6_EXT:
+ printf(" - (outer) L3 type: IPV6_EXT");
+ break;
+ case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN:
+ printf(" - (outer) L3 type: IPV6_EXT_UNKNOWN");
+ break;
+ default:
+ printf(" - (outer) L3 type: Unknown");
+ break;
+ }
+
+ /* (outer) L4 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L4_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L4_TCP:
+ printf(" - (outer) L4 type: TCP");
+ break;
+ case RTE_PTYPE_L4_UDP:
+ printf(" - (outer) L4 type: UDP");
+ break;
+ case RTE_PTYPE_L4_FRAG:
+ printf(" - (outer) L4 type: L4_FRAG");
+ break;
+ case RTE_PTYPE_L4_SCTP:
+ printf(" - (outer) L4 type: SCTP");
+ break;
+ case RTE_PTYPE_L4_ICMP:
+ printf(" - (outer) L4 type: ICMP");
+ break;
+ case RTE_PTYPE_L4_NONFRAG:
+ printf(" - (outer) L4 type: L4_NONFRAG");
+ break;
+ default:
+ printf(" - (outer) L4 type: Unknown");
+ break;
+ }
+
+ /* packet tunnel type */
+ ptype = mb->packet_type & RTE_PTYPE_TUNNEL_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_TUNNEL_IP:
+ printf(" - Tunnel type: IP");
+ break;
+ case RTE_PTYPE_TUNNEL_GRE:
+ printf(" - Tunnel type: GRE");
+ break;
+ case RTE_PTYPE_TUNNEL_VXLAN:
+ printf(" - Tunnel type: VXLAN");
+ break;
+ case RTE_PTYPE_TUNNEL_NVGRE:
+ printf(" - Tunnel type: NVGRE");
+ break;
+ case RTE_PTYPE_TUNNEL_GENEVE:
+ printf(" - Tunnel type: GENEVE");
+ break;
+ case RTE_PTYPE_TUNNEL_GRENAT:
+ printf(" - Tunnel type: GRENAT");
+ break;
+ default:
+ printf(" - Tunnel type: Unkown");
+ break;
+ }
+
+ /* inner L2 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_L2_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L2_MAC:
+ printf(" - Inner L2 type: MAC");
+ break;
+ case RTE_PTYPE_INNER_L2_MAC_VLAN:
+ printf(" - Inner L2 type: MAC_VLAN");
+ break;
+ default:
+ printf(" - Inner L2 type: Unknown");
+ break;
+ }
+
+ /* inner L3 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_INNER_L3_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L3_IPV4:
+ printf(" - Inner L3 type: IPV4");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV4_EXT:
+ printf(" - Inner L3 type: IPV4_EXT");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6:
+ printf(" - Inner L3 type: IPV6");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN:
+ printf(" - Inner L3 type: IPV4_EXT_UNKNOWN");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6_EXT:
+ printf(" - Inner L3 type: IPV6_EXT");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN:
+ printf(" - Inner L3 type: IPV6_EXT_UNKOWN");
+ break;
+ default:
+ printf(" - Inner L3 type: Unkown");
+ break;
+ }
+
+ /* inner L4 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_L4_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L4_TCP:
+ printf(" - Inner L4 type: TCP");
+ break;
+ case RTE_PTYPE_INNER_L4_UDP:
+ printf(" - Inner L4 type: UDP");
+ break;
+ case RTE_PTYPE_INNER_L4_FRAG:
+ printf(" - Inner L4 type: L4_FRAG");
+ break;
+ case RTE_PTYPE_INNER_L4_SCTP:
+ printf(" - Inner L4 type: SCTP");
+ break;
+ case RTE_PTYPE_INNER_L4_ICMP:
+ printf(" - Inner L4 type: ICMP");
+ break;
+ case RTE_PTYPE_INNER_L4_NONFRAG:
+ printf(" - Inner L4 type: L4_NONFRAG");
+ break;
+ default:
+ printf(" - Inner L4 type: Unknown");
+ break;
+ }
+ printf("\n");
+ } else
+ printf("Unknown packet type\n");
+#endif /* RTE_UNIFIED_PKT_TYPE */
if (is_encapsulation) {
struct ipv4_hdr *ipv4_hdr;
struct ipv6_hdr *ipv6_hdr;
@@ -173,7 +352,11 @@ pkt_burst_receive(struct fwd_stream *fs)
l2_len = sizeof(struct ether_hdr);
/* Do not support ipv4 option field */
+#ifdef RTE_UNIFIED_PKT_TYPE
+ if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
+#else
if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
+#endif
l3_len = sizeof(struct ipv4_hdr);
ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v6 12/18] app/test: Remove useless code
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (10 preceding siblings ...)
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 11/18] app/testpmd: " Helin Zhang
@ 2015-06-01 7:33 ` Helin Zhang
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
` (6 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:33 UTC (permalink / raw)
To: dev
Several useless code lines were added accidentally, which blocks packet
type unification. They should be deleted.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test/packet_burst_generator.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
v4 changes:
* Removed several useless code lines which block packet type unification.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index b46eed7..6b1bbb5 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -272,19 +272,21 @@ nomore_mbuf:
if (ipv4) {
pkt->vlan_tci = ETHER_TYPE_IPv4;
pkt->l3_len = sizeof(struct ipv4_hdr);
-
+#ifndef RTE_UNIFIED_PKT_TYPE
if (vlan_enabled)
pkt->ol_flags = PKT_RX_IPV4_HDR | PKT_RX_VLAN_PKT;
else
pkt->ol_flags = PKT_RX_IPV4_HDR;
+#endif
} else {
pkt->vlan_tci = ETHER_TYPE_IPv6;
pkt->l3_len = sizeof(struct ipv6_hdr);
-
+#ifndef RTE_UNIFIED_PKT_TYPE
if (vlan_enabled)
pkt->ol_flags = PKT_RX_IPV6_HDR | PKT_RX_VLAN_PKT;
else
pkt->ol_flags = PKT_RX_IPV6_HDR;
+#endif
}
pkts_burst[nb_pkt] = pkt;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v6 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (11 preceding siblings ...)
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 12/18] app/test: Remove useless code Helin Zhang
@ 2015-06-01 7:34 ` Helin Zhang
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 14/18] examples/ip_reassembly: " Helin Zhang
` (5 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:34 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_fragmentation/main.c | 9 +++++++++
1 file changed, 9 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 0922ba6..5eccecc 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -283,7 +283,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
len = qconf->tx_mbufs[port_out].len;
/* if this is an IPv4 packet */
+#ifdef RTE_UNIFIED_PKT_TYPE
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
/* Read the lookup key (i.e. ip_dst) from the input packet */
@@ -317,9 +321,14 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
if (unlikely (len2 < 0))
return;
}
+#ifdef RTE_UNIFIED_PKT_TYPE
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if this is an IPv6 packet */
+#else
}
/* if this is an IPv6 packet */
else if (m->ol_flags & PKT_RX_IPV6_HDR) {
+#endif
struct ipv6_hdr *ip_hdr;
ipv6 = 1;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v6 14/18] examples/ip_reassembly: replace bit mask based packet type with unified packet type
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (12 preceding siblings ...)
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
@ 2015-06-01 7:34 ` Helin Zhang
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 15/18] examples/l3fwd-acl: " Helin Zhang
` (4 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:34 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_reassembly/main.c | 9 +++++++++
1 file changed, 9 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 9ecb6f9..cb131f6 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -356,7 +356,11 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dst_port = portid;
/* if packet is IPv4 */
+#ifdef RTE_UNIFIED_PKT_TYPE
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & (PKT_RX_IPV4_HDR)) {
+#endif
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
@@ -396,9 +400,14 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
}
eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
+#ifdef RTE_UNIFIED_PKT_TYPE
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if packet is IPv6 */
+#else
}
/* if packet is IPv6 */
else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
+#endif
struct ipv6_extension_fragment *frag_hdr;
struct ipv6_hdr *ip_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v6 15/18] examples/l3fwd-acl: replace bit mask based packet type with unified packet type
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (13 preceding siblings ...)
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 14/18] examples/ip_reassembly: " Helin Zhang
@ 2015-06-01 7:34 ` Helin Zhang
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 16/18] examples/l3fwd-power: " Helin Zhang
` (3 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:34 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-acl/main.c | 29 +++++++++++++++++++++++------
1 file changed, 23 insertions(+), 6 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a5d4f25..2da8bf1 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -645,10 +645,13 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];
+#ifdef RTE_UNIFIED_PKT_TYPE
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
+#else
int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
if (type == PKT_RX_IPV4_HDR) {
-
+#endif
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt,
unsigned char *) + sizeof(struct ether_hdr));
@@ -667,9 +670,11 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
/* Not a valid IPv4 packet */
rte_pktmbuf_free(pkt);
}
-
+#ifdef RTE_UNIFIED_PKT_TYPE
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
+#else
} else if (type == PKT_RX_IPV6_HDR) {
-
+#endif
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -687,17 +692,22 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];
+#ifdef RTE_UNIFIED_PKT_TYPE
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
+#else
int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
if (type == PKT_RX_IPV4_HDR) {
-
+#endif
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;
-
+#ifdef RTE_UNIFIED_PKT_TYPE
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
+#else
} else if (type == PKT_RX_IPV6_HDR) {
-
+#endif
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -745,10 +755,17 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
+#ifdef RTE_UNIFIED_PKT_TYPE
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
+ dump_acl4_rule(m, res);
+ else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
+ dump_acl6_rule(m, res);
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR)
dump_acl4_rule(m, res);
else
dump_acl6_rule(m, res);
+#endif /* RTE_UNIFIED_PKT_TYPE */
}
#endif
rte_pktmbuf_free(m);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v6 16/18] examples/l3fwd-power: replace bit mask based packet type with unified packet type
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (14 preceding siblings ...)
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 15/18] examples/l3fwd-acl: " Helin Zhang
@ 2015-06-01 7:34 ` Helin Zhang
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 17/18] examples/l3fwd: " Helin Zhang
` (2 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:34 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-power/main.c | 8 ++++++++
1 file changed, 8 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 6ac342b..e27ad4e 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -635,7 +635,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+#ifdef RTE_UNIFIED_PKT_TYPE
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
/* Handle IPv4 headers.*/
ipv4_hdr =
(struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char*)
@@ -670,8 +674,12 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
ether_addr_copy(&ports_eth_addr[dst_port], ð_hdr->s_addr);
send_single_packet(m, dst_port);
+#ifdef RTE_UNIFIED_PKT_TYPE
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+#else
}
else {
+#endif
/* Handle IPv6 headers.*/
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
struct ipv6_hdr *ipv6_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v6 17/18] examples/l3fwd: replace bit mask based packet type with unified packet type
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (15 preceding siblings ...)
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 16/18] examples/l3fwd-power: " Helin Zhang
@ 2015-06-01 7:34 ` Helin Zhang
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 18/18] mbuf: remove old packet type bit masks Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:34 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd/main.c | 123 ++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 120 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v3 changes:
* Minor bug fixes and enhancements.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
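[Editor's note] One non-obvious detail in the diff below: the burst pre-check in main_loop ANDs the packet_type of four consecutive mbufs and then tests RTE_PTYPE_L3_IPV4 (or RTE_PTYPE_L3_IPV6). That works because the IPv4 bit of the L3 field survives the AND only if it is set in all four packets. A hedged sketch of the check in isolation (burst_all_ipv4 is a hypothetical helper):

/* Editor's sketch of the 4-packet pre-classification used in main_loop. */
static inline int
burst_all_ipv4(struct rte_mbuf *pkt[4])
{
	uint32_t agg = pkt[0]->packet_type & pkt[1]->packet_type &
		       pkt[2]->packet_type & pkt[3]->packet_type;

	/* the IPv4 bit remains set in 'agg' only when every packet in the
	 * group carries it, so one test covers the whole burst */
	return (agg & RTE_PTYPE_L3_IPV4) != 0;
}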
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index e32512e..72d9ab7 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -955,7 +955,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+#ifdef RTE_UNIFIED_PKT_TYPE
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
/* Handle IPv4 headers.*/
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char *) +
sizeof(struct ether_hdr));
@@ -989,8 +993,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], ð_hdr->s_addr);
send_single_packet(m, dst_port);
-
+#ifdef RTE_UNIFIED_PKT_TYPE
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+#else
} else {
+#endif
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;
@@ -1011,8 +1018,13 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], ð_hdr->s_addr);
send_single_packet(m, dst_port);
+#ifdef RTE_UNIFIED_PKT_TYPE
+ } else
+ /* Free the mbuf that contains non-IPV4/IPV6 packet */
+ rte_pktmbuf_free(m);
+#else
}
-
+#endif
}
#ifdef DO_RFC_1812_CHECKS
@@ -1036,12 +1048,19 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
+#ifdef RTE_UNIFIED_PKT_TYPE
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
+#else
rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
+#endif
{
uint8_t ihl;
+#ifdef RTE_UNIFIED_PKT_TYPE
+ if (RTE_ETH_IS_IPV4_HDR(ptype)) {
+#else
if ((flags & PKT_RX_IPV4_HDR) != 0) {
-
+#endif
ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
ipv4_hdr->time_to_live--;
@@ -1071,11 +1090,19 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;
+#ifdef RTE_UNIFIED_PKT_TYPE
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
+#else
if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
+#ifdef RTE_UNIFIED_PKT_TYPE
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
+#else
} else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
+#endif
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1109,12 +1136,52 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];
dst_port[0] = dp;
+#ifdef RTE_UNIFIED_PKT_TYPE
+ rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
+#else
rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
+#endif
te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
}
+#ifdef RTE_UNIFIED_PKT_TYPE
+/*
+ * Read packet_type and destination IPV4 addresses from 4 mbufs.
+ */
+static inline void
+processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
+ __m128i *dip,
+ uint32_t *ipv4_flag)
+{
+ struct ipv4_hdr *ipv4_hdr;
+ struct ether_hdr *eth_hdr;
+ uint32_t x0, x1, x2, x3;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x0 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] = pkt[0]->packet_type & RTE_PTYPE_L3_IPV4;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x1 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] &= pkt[1]->packet_type;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x2 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] &= pkt[2]->packet_type;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x3 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] &= pkt[3]->packet_type;
+
+ dip[0] = _mm_set_epi32(x3, x2, x1, x0);
+}
+#else /* RTE_UNIFIED_PKT_TYPE */
/*
* Read ol_flags and destination IPV4 addresses from 4 mbufs.
*/
@@ -1147,14 +1214,24 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
+#endif /* RTE_UNIFIED_PKT_TYPE */
/*
* Lookup into LPM for destination port.
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
+#ifdef RTE_UNIFIED_PKT_TYPE
+processx4_step2(const struct lcore_conf *qconf,
+ __m128i dip,
+ uint32_t ipv4_flag,
+ uint8_t portid,
+ struct rte_mbuf *pkt[FWDSTEP],
+ uint16_t dprt[FWDSTEP])
+#else
processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
+#endif /* RTE_UNIFIED_PKT_TYPE */
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1164,7 +1241,11 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);
/* if all 4 packets are IPV4. */
+#ifdef RTE_UNIFIED_PKT_TYPE
+ if (likely(ipv4_flag)) {
+#else
if (likely(flag != 0)) {
+#endif
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1214,6 +1295,16 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[2], te[2]);
_mm_store_si128(p[3], te[3]);
+#ifdef RTE_UNIFIED_PKT_TYPE
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
+ &dst_port[0], pkt[0]->packet_type);
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
+ &dst_port[1], pkt[1]->packet_type);
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
+ &dst_port[2], pkt[2]->packet_type);
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
+ &dst_port[3], pkt[3]->packet_type);
+#else /* RTE_UNIFIED_PKT_TYPE */
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
&dst_port[0], pkt[0]->ol_flags);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
@@ -1222,6 +1313,7 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
&dst_port[2], pkt[2]->ol_flags);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
&dst_port[3], pkt[3]->ol_flags);
+#endif /* RTE_UNIFIED_PKT_TYPE */
}
/*
@@ -1408,7 +1500,11 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
+#ifdef RTE_UNIFIED_PKT_TYPE
+ uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
+#else
uint32_t flag[MAX_PKT_BURST / FWDSTEP];
+#endif
uint16_t pnum[MAX_PKT_BURST + 1];
#endif
@@ -1478,6 +1574,18 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
for (j = 0; j < n ; j+=4) {
+#ifdef RTE_UNIFIED_PKT_TYPE
+ uint32_t pkt_type =
+ pkts_burst[j]->packet_type &
+ pkts_burst[j+1]->packet_type &
+ pkts_burst[j+2]->packet_type &
+ pkts_burst[j+3]->packet_type;
+ if (pkt_type & RTE_PTYPE_L3_IPV4) {
+ simple_ipv4_fwd_4pkts(
+ &pkts_burst[j], portid, qconf);
+ } else if (pkt_type &
+ RTE_PTYPE_L3_IPV6) {
+#else /* RTE_UNIFIED_PKT_TYPE */
uint32_t ol_flag = pkts_burst[j]->ol_flags
& pkts_burst[j+1]->ol_flags
& pkts_burst[j+2]->ol_flags
@@ -1486,6 +1594,7 @@ main_loop(__attribute__((unused)) void *dummy)
simple_ipv4_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else if (ol_flag & PKT_RX_IPV6_HDR) {
+#endif /* RTE_UNIFIED_PKT_TYPE */
simple_ipv6_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1510,13 +1619,21 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
+#ifdef RTE_UNIFIED_PKT_TYPE
+ &ipv4_flag[j / FWDSTEP]);
+#else
&flag[j / FWDSTEP]);
+#endif
}
k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
+#ifdef RTE_UNIFIED_PKT_TYPE
+ ipv4_flag[j / FWDSTEP], portid,
+#else
flag[j / FWDSTEP], portid,
+#endif
&pkts_burst[j], &dst_port[j]);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v6 18/18] mbuf: remove old packet type bit masks
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (16 preceding siblings ...)
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 17/18] examples/l3fwd: " Helin Zhang
@ 2015-06-01 7:34 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-01 7:34 UTC (permalink / raw)
To: dev
As unified packet types are used instead, those old bit masks and
the relevant macros for packet type indication need to be removed.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 4 ++++
lib/librte_mbuf/rte_mbuf.h | 4 ++++
2 files changed, 8 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
* Redefined the bit masks for packet RX offload flags.
v5 changes:
* Rolled back the bit masks of RX flags, for ABI compatibility.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index f506517..0b3a4fc 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -251,14 +251,18 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
+#ifndef RTE_UNIFIED_PKT_TYPE
case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
+#endif /* RTE_UNIFIED_PKT_TYPE */
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
+#ifndef RTE_UNIFIED_PKT_TYPE
case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
+#endif /* RTE_UNIFIED_PKT_TYPE */
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 94e51cd..d82fc8e 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -91,14 +91,18 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
+#ifndef RTE_UNIFIED_PKT_TYPE
#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
+#endif /* RTE_UNIFIED_PKT_TYPE */
#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
+#ifndef RTE_UNIFIED_PKT_TYPE
#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
+#endif /* RTE_UNIFIED_PKT_TYPE */
#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
/* add new RX flags here */
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
@ 2015-06-01 8:14 ` Olivier MATZ
2015-06-02 13:27 ` O'Driscoll, Tim
0 siblings, 1 reply; 257+ messages in thread
From: Olivier MATZ @ 2015-06-01 8:14 UTC (permalink / raw)
To: Helin Zhang, dev
Hi Helin,
+CC Neil
On 06/01/2015 09:33 AM, Helin Zhang wrote:
> In order to unify the packet type, the field of 'packet_type' in
> 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
> Accordingly, some fields in 'struct rte_mbuf' are re-organized to
> support this change for Vector PMD. As 'struct rte_kni_mbuf' for
> KNI should be right mapped to 'struct rte_mbuf', it should be
> modified accordingly. In addition, Vector PMD of ixgbe is disabled
> by default, as 'struct rte_mbuf' changed.
> To avoid breaking ABI compatibility, all the changes would be
> enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
What are the plans for this compile-time option in the future?
I wonder what are the benefits of having this option in terms
of ABI compatibility: when it is disabled, it is ABI-compatible but
the packet-type feature is not present, and when it is enabled we
have the feature but it breaks the compatibility.
In my opinion, the v5 is preferable: for this kind of features, I
don't see how the ABI can be preserved, and I think packet-type
won't be the only feature that will modify the mbuf structure. I think
the process described here should be applied:
http://dpdk.org/browse/dpdk/tree/doc/guides/rel_notes/abi.rst
(starting from "Some ABI changes may be too significant to reasonably
maintain multiple versions of").
Regards,
Olivier
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> ---
> config/common_linuxapp | 2 +-
> .../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 ++++++
> lib/librte_mbuf/rte_mbuf.h | 23 ++++++++++++++++++++++
> 3 files changed, 30 insertions(+), 1 deletion(-)
>
> v2 changes:
> * Enlarged the packet_type field from 16 bits to 32 bits.
> * Redefined the packet type sub-fields.
> * Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
>
> v3 changes:
> * Put the mbuf layout changes into a single patch.
> * Disabled vector ixgbe PMD by default, as mbuf layout changed.
>
> v5 changes:
> * Re-worded the commit logs.
>
> v6 changes:
> * Disabled the code changes for unified packet type by default, to
> avoid breaking ABI compatibility.
>
> diff --git a/config/common_linuxapp b/config/common_linuxapp
> index 0078dc9..6b067c7 100644
> --- a/config/common_linuxapp
> +++ b/config/common_linuxapp
> @@ -167,7 +167,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
> CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
> CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
> CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
> -CONFIG_RTE_IXGBE_INC_VECTOR=y
> +CONFIG_RTE_IXGBE_INC_VECTOR=n
> CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
>
> #
> diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> index 1e55c2d..7a2abbb 100644
> --- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> +++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> @@ -117,9 +117,15 @@ struct rte_kni_mbuf {
> uint16_t data_off; /**< Start address of data in segment buffer. */
> char pad1[4];
> uint64_t ol_flags; /**< Offload features. */
> +#ifdef RTE_UNIFIED_PKT_TYPE
> + char pad2[4];
> + uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
> + uint16_t data_len; /**< Amount of data in segment buffer. */
> +#else
> char pad2[2];
> uint16_t data_len; /**< Amount of data in segment buffer. */
> uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
> +#endif
>
> /* fields on second cache line */
> char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index ab6de67..a8662c2 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -269,6 +269,28 @@ struct rte_mbuf {
> /* remaining bytes are set on RX when pulling packet from descriptor */
> MARKER rx_descriptor_fields1;
>
> +#ifdef RTE_UNIFIED_PKT_TYPE
> + /*
> + * The packet type, which is the combination of outer/inner L2, L3, L4
> + * and tunnel types.
> + */
> + union {
> + uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
> + struct {
> + uint32_t l2_type:4; /**< (Outer) L2 type. */
> + uint32_t l3_type:4; /**< (Outer) L3 type. */
> + uint32_t l4_type:4; /**< (Outer) L4 type. */
> + uint32_t tun_type:4; /**< Tunnel type. */
> + uint32_t inner_l2_type:4; /**< Inner L2 type. */
> + uint32_t inner_l3_type:4; /**< Inner L3 type. */
> + uint32_t inner_l4_type:4; /**< Inner L4 type. */
> + };
> + };
> +
> + uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
> + uint16_t data_len; /**< Amount of data in segment buffer. */
> + uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
> +#else
> /**
> * The packet type, which is used to indicate ordinary packet and also
> * tunneled packet format, i.e. each number is represented a type of
> @@ -280,6 +302,7 @@ struct rte_mbuf {
> uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
> uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
> uint16_t reserved;
> +#endif
> union {
> uint32_t rss; /**< RSS hash result if RSS enabled */
> struct {
>
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-01 8:14 ` Olivier MATZ
@ 2015-06-02 13:27 ` O'Driscoll, Tim
2015-06-10 14:32 ` Olivier MATZ
0 siblings, 1 reply; 257+ messages in thread
From: O'Driscoll, Tim @ 2015-06-02 13:27 UTC (permalink / raw)
To: Olivier MATZ, Zhang, Helin, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
> Sent: Monday, June 1, 2015 9:15 AM
> To: Zhang, Helin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in
> rte_mbuf
>
> Hi Helin,
>
> +CC Neil
>
> On 06/01/2015 09:33 AM, Helin Zhang wrote:
> > In order to unify the packet type, the field of 'packet_type' in
> > 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
> > Accordingly, some fields in 'struct rte_mbuf' are re-organized to
> > support this change for Vector PMD. As 'struct rte_kni_mbuf' for
> > KNI should be right mapped to 'struct rte_mbuf', it should be
> > modified accordingly. In addition, Vector PMD of ixgbe is disabled
> > by default, as 'struct rte_mbuf' changed.
> > To avoid breaking ABI compatibility, all the changes would be
> > enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
>
> What are the plans for this compile-time option in the future?
>
> I wonder what are the benefits of having this option in terms
> of ABI compatibility: when it is disabled, it is ABI-compatible but
> the packet-type feature is not present, and when it is enabled we
> have the feature but it breaks the compatibility.
>
> In my opinion, the v5 is preferable: for this kind of features, I
> don't see how the ABI can be preserved, and I think packet-type
> won't be the only feature that will modify the mbuf structure. I think
> the process described here should be applied:
> http://dpdk.org/browse/dpdk/tree/doc/guides/rel_notes/abi.rst
>
> (starting from "Some ABI changes may be too significant to reasonably
> maintain multiple versions of").
>
>
> Regards,
> Olivier
>
This is just like the change that Steve (Cunming) Liang submitted for Interrupt Mode. We have the same problem in both cases: we want to find a way to get the features included, but need to comply with our ABI policy. So, in both cases, the proposal is to add a config option for the change, disabled by default, so we maintain backward compatibility. Users that want these changes, and are willing to accept the associated ABI change, have to specifically enable them.
We can note in the Deprecation Notices in the Release Notes for 2.1 that these config options will be removed in 2.2. The features will then be enabled by default.
This seems like a good compromise which allows us to get these changes into 2.1 but avoids breaking the ABI policy.
Tim
>
>
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > ---
> > config/common_linuxapp | 2 +-
> > .../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 ++++++
> > lib/librte_mbuf/rte_mbuf.h | 23
> ++++++++++++++++++++++
> > 3 files changed, 30 insertions(+), 1 deletion(-)
> >
> > v2 changes:
> > * Enlarged the packet_type field from 16 bits to 32 bits.
> > * Redefined the packet type sub-fields.
> > * Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf
> changes.
> >
> > v3 changes:
> > * Put the mbuf layout changes into a single patch.
> > * Disabled vector ixgbe PMD by default, as mbuf layout changed.
> >
> > v5 changes:
> > * Re-worded the commit logs.
> >
> > v6 changes:
> > * Disabled the code changes for unified packet type by default, to
> > avoid breaking ABI compatibility.
> >
> > diff --git a/config/common_linuxapp b/config/common_linuxapp
> > index 0078dc9..6b067c7 100644
> > --- a/config/common_linuxapp
> > +++ b/config/common_linuxapp
> > @@ -167,7 +167,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
> > CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
> > CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
> > CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
> > -CONFIG_RTE_IXGBE_INC_VECTOR=y
> > +CONFIG_RTE_IXGBE_INC_VECTOR=n
> > CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
> >
> > #
> > diff --git a/lib/librte_eal/linuxapp/eal/include/exec-
> env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-
> env/rte_kni_common.h
> > index 1e55c2d..7a2abbb 100644
> > --- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> > +++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
> > @@ -117,9 +117,15 @@ struct rte_kni_mbuf {
> > uint16_t data_off; /**< Start address of data in segment
> buffer. */
> > char pad1[4];
> > uint64_t ol_flags; /**< Offload features. */
> > +#ifdef RTE_UNIFIED_PKT_TYPE
> > + char pad2[4];
> > + uint32_t pkt_len; /**< Total pkt len: sum of all segment
> data_len. */
> > + uint16_t data_len; /**< Amount of data in segment buffer. */
> > +#else
> > char pad2[2];
> > uint16_t data_len; /**< Amount of data in segment buffer. */
> > uint32_t pkt_len; /**< Total pkt len: sum of all segment
> data_len. */
> > +#endif
> >
> > /* fields on second cache line */
> > char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index ab6de67..a8662c2 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -269,6 +269,28 @@ struct rte_mbuf {
> > /* remaining bytes are set on RX when pulling packet from
> descriptor */
> > MARKER rx_descriptor_fields1;
> >
> > +#ifdef RTE_UNIFIED_PKT_TYPE
> > + /*
> > + * The packet type, which is the combination of outer/inner L2, L3,
> L4
> > + * and tunnel types.
> > + */
> > + union {
> > + uint32_t packet_type; /**< L2/L3/L4 and tunnel information.
> */
> > + struct {
> > + uint32_t l2_type:4; /**< (Outer) L2 type. */
> > + uint32_t l3_type:4; /**< (Outer) L3 type. */
> > + uint32_t l4_type:4; /**< (Outer) L4 type. */
> > + uint32_t tun_type:4; /**< Tunnel type. */
> > + uint32_t inner_l2_type:4; /**< Inner L2 type. */
> > + uint32_t inner_l3_type:4; /**< Inner L3 type. */
> > + uint32_t inner_l4_type:4; /**< Inner L4 type. */
> > + };
> > + };
> > +
> > + uint32_t pkt_len; /**< Total pkt len: sum of all segments.
> */
> > + uint16_t data_len; /**< Amount of data in segment buffer. */
> > + uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU
> order) */
> > +#else
> > /**
> > * The packet type, which is used to indicate ordinary packet and
> also
> > * tunneled packet format, i.e. each number is represented a type
> of
> > @@ -280,6 +302,7 @@ struct rte_mbuf {
> > uint32_t pkt_len; /**< Total pkt len: sum of all segments.
> */
> > uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU
> order) */
> > uint16_t reserved;
> > +#endif
> > union {
> > uint32_t rss; /**< RSS hash result if RSS enabled */
> > struct {
> >
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-02 13:27 ` O'Driscoll, Tim
@ 2015-06-10 14:32 ` Olivier MATZ
2015-06-10 14:51 ` Zhang, Helin
` (2 more replies)
0 siblings, 3 replies; 257+ messages in thread
From: Olivier MATZ @ 2015-06-10 14:32 UTC (permalink / raw)
To: O'Driscoll, Tim, Zhang, Helin, dev
Hi Tim, Helin,
On 06/02/2015 03:27 PM, O'Driscoll, Tim wrote:
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
>> Sent: Monday, June 1, 2015 9:15 AM
>> To: Zhang, Helin; dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in
>> rte_mbuf
>>
>> Hi Helin,
>>
>> +CC Neil
>>
>> On 06/01/2015 09:33 AM, Helin Zhang wrote:
>>> In order to unify the packet type, the field of 'packet_type' in
>>> 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
>>> Accordingly, some fields in 'struct rte_mbuf' are re-organized to
>>> support this change for Vector PMD. As 'struct rte_kni_mbuf' for
>>> KNI should be right mapped to 'struct rte_mbuf', it should be
>>> modified accordingly. In addition, Vector PMD of ixgbe is disabled
>>> by default, as 'struct rte_mbuf' changed.
>>> To avoid breaking ABI compatibility, all the changes would be
>>> enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
>>
>> What are the plans for this compile-time option in the future?
>>
>> I wonder what are the benefits of having this option in terms
>> of ABI compatibility: when it is disabled, it is ABI-compatible but
>> the packet-type feature is not present, and when it is enabled we
>> have the feature but it breaks the compatibility.
>>
>> In my opinion, the v5 is preferable: for this kind of features, I
>> don't see how the ABI can be preserved, and I think packet-type
>> won't be the only feature that will modify the mbuf structure. I think
>> the process described here should be applied:
>> http://dpdk.org/browse/dpdk/tree/doc/guides/rel_notes/abi.rst
>>
>> (starting from "Some ABI changes may be too significant to reasonably
>> maintain multiple versions of").
>>
>>
>> Regards,
>> Olivier
>>
>
> This is just like the change that Steve (Cunming) Liang submitted for Interrupt Mode. We have the same problem in both cases: we want to find a way to get the features included, but need to comply with our ABI policy. So, in both cases, the proposal is to add a config option to enable the change by default, so we maintain backward compatibility. Users that want these changes, and are willing to accept the associated ABI change, have to specifically enable them.
>
> We can note in the Deprecation Notices in the Release Notes for 2.1 that these config options will be removed in 2.2. The features will then be enabled by default.
>
> This seems like a good compromise which allows us to get these changes into 2.1 but avoids breaking the ABI policy.
Sorry for the late answer.
After some thoughts on this topic, I understand that having a
compile-time option is perhaps a good compromise between
keeping compatibility and having new features earlier.
I'm just afraid about having one #ifdef in the code for
each new feature that cannot keep the ABI compatibility.
What do you think about having one option -- let's call
it "CONFIG_RTE_NEXT_ABI" --, that is disabled by default,
and that would surround any new feature that breaks the
ABI?
This would have several advantages:
- only 2 cases (on or off), the combinatorial is smaller than
having one option per feature
- all next features breaking the abi can be identified by a grep
- the code inside the #ifdef can be enabled in a simple operation
by Thomas after each release.
Thomas, any comment?
Regards,
Olivier
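[Editor's note] For illustration only (an editor's sketch, not text from the thread), such an umbrella option would replace the per-feature guard used in this series with a single macro; the spelling RTE_NEXT_ABI assumes the usual mapping of a CONFIG_RTE_* option to an RTE_* preprocessor symbol, and the field layout is purely illustrative:

#include <stdint.h>

/* Editor's sketch: every pending ABI-breaking change sits behind the same
 * guard, so one config option opts into all of them and a grep for the
 * macro lists every such change. */
struct next_abi_example {
#ifdef RTE_NEXT_ABI
	uint32_t packet_type;	/* enlarged field: new, ABI-breaking layout */
#else
	uint16_t packet_type;	/* existing 16-bit field: ABI preserved */
	uint16_t reserved;
#endif
	uint32_t pkt_len;
};

After a release the option could simply be switched on and the #else branches dropped, which is the "simple operation" mentioned above.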
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-10 14:32 ` Olivier MATZ
@ 2015-06-10 14:51 ` Zhang, Helin
2015-06-10 15:39 ` Ananyev, Konstantin
2015-06-10 16:14 ` Thomas Monjalon
2 siblings, 0 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-06-10 14:51 UTC (permalink / raw)
To: Olivier MATZ, O'Driscoll, Tim, dev
Hi Oliver
> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Wednesday, June 10, 2015 10:33 PM
> To: O'Driscoll, Tim; Zhang, Helin; dev@dpdk.org
> Cc: Thomas Monjalon
> Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in
> rte_mbuf
>
> Hi Tim, Helin,
>
> On 06/02/2015 03:27 PM, O'Driscoll, Tim wrote:
> >
> >> -----Original Message-----
> >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
> >> Sent: Monday, June 1, 2015 9:15 AM
> >> To: Zhang, Helin; dev@dpdk.org
> >> Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type
> >> in rte_mbuf
> >>
> >> Hi Helin,
> >>
> >> +CC Neil
> >>
> >> On 06/01/2015 09:33 AM, Helin Zhang wrote:
> >>> In order to unify the packet type, the field of 'packet_type' in
> >>> 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
> >>> Accordingly, some fields in 'struct rte_mbuf' are re-organized to
> >>> support this change for Vector PMD. As 'struct rte_kni_mbuf' for KNI
> >>> should be right mapped to 'struct rte_mbuf', it should be modified
> >>> accordingly. In addition, Vector PMD of ixgbe is disabled by
> >>> default, as 'struct rte_mbuf' changed.
> >>> To avoid breaking ABI compatibility, all the changes would be
> >>> enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
> >>
> >> What are the plans for this compile-time option in the future?
> >>
> >> I wonder what are the benefits of having this option in terms of ABI
> >> compatibility: when it is disabled, it is ABI-compatible but the
> >> packet-type feature is not present, and when it is enabled we have
> >> the feature but it breaks the compatibility.
> >>
> >> In my opinion, the v5 is preferable: for this kind of features, I
> >> don't see how the ABI can be preserved, and I think packet-type won't
> >> be the only feature that will modify the mbuf structure. I think the
> >> process described here should be applied:
> >> http://dpdk.org/browse/dpdk/tree/doc/guides/rel_notes/abi.rst
> >>
> >> (starting from "Some ABI changes may be too significant to reasonably
> >> maintain multiple versions of").
> >>
> >>
> >> Regards,
> >> Olivier
> >>
> >
> > This is just like the change that Steve (Cunming) Liang submitted for Interrupt
> Mode. We have the same problem in both cases: we want to find a way to get
> the features included, but need to comply with our ABI policy. So, in both cases,
> the proposal is to add a config option to enable the change by default, so we
> maintain backward compatibility. Users that want these changes, and are willing
> to accept the associated ABI change, have to specifically enable them.
> >
> > We can note in the Deprecation Notices in the Release Notes for 2.1 that these
> config options will be removed in 2.2. The features will then be enabled by
> default.
> >
> > This seems like a good compromise which allows us to get these changes into
> 2.1 but avoids breaking the ABI policy.
>
> Sorry for the late answer.
>
> After some thoughts on this topic, I understand that having a compile-time
> option is perhaps a good compromise between keeping compatibility and having
> new features earlier.
>
> I'm just afraid about having one #ifdef in the code for each new feature that
> cannot keep the ABI compatibility.
> What do you think about having one option -- let's call it
> "CONFIG_RTE_NEXT_ABI" --, that is disabled by default, and that would surround
> any new feature that breaks the ABI?
Will we allow this type of workaround for a long time? If yes, I agree with your good idea.
Regards,
Helin
>
> This would have several advantages:
> - only 2 cases (on or off), the combinatorial is smaller than
> having one option per feature
> - all next features breaking the abi can be identified by a grep
> - the code inside the #ifdef can be enabled in a simple operation
> by Thomas after each release.
>
> Thomas, any comment?
>
> Regards,
> Olivier
>
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-10 14:32 ` Olivier MATZ
2015-06-10 14:51 ` Zhang, Helin
@ 2015-06-10 15:39 ` Ananyev, Konstantin
2015-06-12 3:22 ` Zhang, Helin
2015-06-10 16:14 ` Thomas Monjalon
2 siblings, 1 reply; 257+ messages in thread
From: Ananyev, Konstantin @ 2015-06-10 15:39 UTC (permalink / raw)
To: Olivier MATZ, O'Driscoll, Tim, Zhang, Helin, dev
Hi Olivier,
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
> Sent: Wednesday, June 10, 2015 3:33 PM
> To: O'Driscoll, Tim; Zhang, Helin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf
>
> Hi Tim, Helin,
>
> On 06/02/2015 03:27 PM, O'Driscoll, Tim wrote:
> >
> >> -----Original Message-----
> >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
> >> Sent: Monday, June 1, 2015 9:15 AM
> >> To: Zhang, Helin; dev@dpdk.org
> >> Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in
> >> rte_mbuf
> >>
> >> Hi Helin,
> >>
> >> +CC Neil
> >>
> >> On 06/01/2015 09:33 AM, Helin Zhang wrote:
> >>> In order to unify the packet type, the field of 'packet_type' in
> >>> 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
> >>> Accordingly, some fields in 'struct rte_mbuf' are re-organized to
> >>> support this change for Vector PMD. As 'struct rte_kni_mbuf' for
> >>> KNI should be right mapped to 'struct rte_mbuf', it should be
> >>> modified accordingly. In addition, Vector PMD of ixgbe is disabled
> >>> by default, as 'struct rte_mbuf' changed.
> >>> To avoid breaking ABI compatibility, all the changes would be
> >>> enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
> >>
> >> What are the plans for this compile-time option in the future?
> >>
> >> I wonder what are the benefits of having this option in terms
> >> of ABI compatibility: when it is disabled, it is ABI-compatible but
> >> the packet-type feature is not present, and when it is enabled we
> >> have the feature but it breaks the compatibility.
> >>
> >> In my opinion, the v5 is preferable: for this kind of features, I
> >> don't see how the ABI can be preserved, and I think packet-type
> >> won't be the only feature that will modify the mbuf structure. I think
> >> the process described here should be applied:
> >> http://dpdk.org/browse/dpdk/tree/doc/guides/rel_notes/abi.rst
> >>
> >> (starting from "Some ABI changes may be too significant to reasonably
> >> maintain multiple versions of").
> >>
> >>
> >> Regards,
> >> Olivier
> >>
> >
> > This is just like the change that Steve (Cunming) Liang submitted for Interrupt Mode. We have the same problem in both cases: we
> want to find a way to get the features included, but need to comply with our ABI policy. So, in both cases, the proposal is to add a
> config option to enable the change by default, so we maintain backward compatibility. Users that want these changes, and are willing
> to accept the associated ABI change, have to specifically enable them.
> >
> > We can note in the Deprecation Notices in the Release Notes for 2.1 that these config options will be removed in 2.2. The features
> will then be enabled by default.
> >
> > This seems like a good compromise which allows us to get these changes into 2.1 but avoids breaking the ABI policy.
>
> Sorry for the late answer.
>
> After some thoughts on this topic, I understand that having a
> compile-time option is perhaps a good compromise between
> keeping compatibility and having new features earlier.
>
> I'm just afraid about having one #ifdef in the code for
> each new feature that cannot keep the ABI compatibility.
> What do you think about having one option -- let's call
> it "CONFIG_RTE_NEXT_ABI" --, that is disabled by default,
> and that would surround any new feature that breaks the
> ABI?
I am not Tim/Helin, but I really like that idea :)
Konstantin
>
> This would have several advantages:
> - only 2 cases (on or off), the combinatorial is smaller than
> having one option per feature
> - all next features breaking the abi can be identified by a grep
> - the code inside the #ifdef can be enabled in a simple operation
> by Thomas after each release.
>
> Thomas, any comment?
>
> Regards,
> Olivier
>
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-10 14:32 ` Olivier MATZ
2015-06-10 14:51 ` Zhang, Helin
2015-06-10 15:39 ` Ananyev, Konstantin
@ 2015-06-10 16:14 ` Thomas Monjalon
2015-06-12 7:24 ` Panu Matilainen
2 siblings, 1 reply; 257+ messages in thread
From: Thomas Monjalon @ 2015-06-10 16:14 UTC (permalink / raw)
To: Olivier MATZ, O'Driscoll, Tim, Zhang, Helin, nhorman; +Cc: dev
2015-06-10 16:32, Olivier MATZ:
> On 06/02/2015 03:27 PM, O'Driscoll, Tim wrote:
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
> >> On 06/01/2015 09:33 AM, Helin Zhang wrote:
> >>> In order to unify the packet type, the field of 'packet_type' in
> >>> 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
> >>> Accordingly, some fields in 'struct rte_mbuf' are re-organized to
> >>> support this change for Vector PMD. As 'struct rte_kni_mbuf' for
> >>> KNI should be right mapped to 'struct rte_mbuf', it should be
> >>> modified accordingly. In addition, Vector PMD of ixgbe is disabled
> >>> by default, as 'struct rte_mbuf' changed.
> >>> To avoid breaking ABI compatibility, all the changes would be
> >>> enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
> >>
> >> What are the plans for this compile-time option in the future?
> >>
> >> I wonder what are the benefits of having this option in terms
> >> of ABI compatibility: when it is disabled, it is ABI-compatible but
> >> the packet-type feature is not present, and when it is enabled we
> >> have the feature but it breaks the compatibility.
> >>
> >> In my opinion, the v5 is preferable: for this kind of features, I
> >> don't see how the ABI can be preserved, and I think packet-type
> >> won't be the only feature that will modify the mbuf structure. I think
> >> the process described here should be applied:
> >> http://dpdk.org/browse/dpdk/tree/doc/guides/rel_notes/abi.rst
> >>
> >> (starting from "Some ABI changes may be too significant to reasonably
> >> maintain multiple versions of").
> >
> > This is just like the change that Steve (Cunming) Liang submitted for
> > Interrupt Mode. We have the same problem in both cases: we want to find
> > a way to get the features included, but need to comply with our ABI
> > policy. So, in both cases, the proposal is to add a config option to
> > enable the change by default, so we maintain backward compatibility.
> > Users that want these changes, and are willing to accept the
> > associated ABI change, have to specifically enable them.
> >
> > We can note in the Deprecation Notices in the Release Notes for 2.1
> > that these config options will be removed in 2.2. The features will
> > then be enabled by default.
> >
> > This seems like a good compromise which allows us to get these changes
> > into 2.1 but avoids breaking the ABI policy.
>
> Sorry for the late answer.
>
> After some thoughts on this topic, I understand that having a
> compile-time option is perhaps a good compromise between
> keeping compatibility and having new features earlier.
>
> I'm just afraid about having one #ifdef in the code for
> each new feature that cannot keep the ABI compatibility.
> What do you think about having one option -- let's call
> it "CONFIG_RTE_NEXT_ABI" --, that is disabled by default,
> and that would surround any new feature that breaks the
> ABI?
>
> This would have several advantages:
> - only 2 cases (on or off), the combinatorial is smaller than
> having one option per feature
> - all next features breaking the abi can be identified by a grep
> - the code inside the #ifdef can be enabled in a simple operation
> by Thomas after each release.
>
> Thomas, any comment?
As previously discussed one-to-one with Olivier, I think that's a good proposal
for introducing changes that deeply break the ABI.
Let's sum up the current policy:
1/ For changes which have a limited impact on the ABI, backward compatibility
must be kept for 1 release, including the notice in doc/guides/rel_notes/abi.rst.
2/ For important changes like the mbuf rework, there was an agreement on skipping
backward compatibility after 3 acknowledgements and a 1-release-long notice.
Then the ABI numbering must be incremented.
This CONFIG_RTE_NEXT_ABI proposal would change the rules for the second case.
In order to be adopted, a patch for the file doc/guides/rel_notes/abi.rst must
be submitted and strongly acknowledged.
The ABI numbering must also be clearly explained:
1/ Should we have different library version numbers depending on CONFIG_RTE_NEXT_ABI?
It seems straightforward to use "ifeq" when setting LIBABIVER in the Makefiles.
2/ Are we able to have some "if CONFIG_RTE_NEXT_ABI" statement in the .map files?
Maybe we should remove these files and generate them with some preprocessing.
Neil, as the ABI policy author, what is your opinion?
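On point 1/, a rough sketch of the Makefile side is below. It is only an
illustration, assuming the config option is visible to make as
CONFIG_RTE_NEXT_ABI=y; the version numbers are hypothetical and the real
per-library Makefiles may handle this differently.

    # Hedged sketch: bump LIBABIVER only when the next-ABI option is enabled.
    # The values are examples, not taken from any actual DPDK Makefile.
    ifeq ($(CONFIG_RTE_NEXT_ABI),y)
    LIBABIVER := 2
    else
    LIBABIVER := 1
    endif

Since LIBABIVER feeds into the library soname, bumping it this way should also
produce the soname change that an incompatible ABI requires.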
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-10 15:39 ` Ananyev, Konstantin
@ 2015-06-12 3:22 ` Zhang, Helin
0 siblings, 0 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-06-12 3:22 UTC (permalink / raw)
To: Ananyev, Konstantin, Olivier MATZ, O'Driscoll, Tim, Thomas Monjalon
Cc: dev
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Wednesday, June 10, 2015 11:40 PM
> To: Olivier MATZ; O'Driscoll, Tim; Zhang, Helin; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in
> rte_mbuf
>
> Hi Olivier,
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
> > Sent: Wednesday, June 10, 2015 3:33 PM
> > To: O'Driscoll, Tim; Zhang, Helin; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in
> > rte_mbuf
> >
> > Hi Tim, Helin,
> >
> > On 06/02/2015 03:27 PM, O'Driscoll, Tim wrote:
> > >
> > >> -----Original Message-----
> > >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
> > >> Sent: Monday, June 1, 2015 9:15 AM
> > >> To: Zhang, Helin; dev@dpdk.org
> > >> Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type
> > >> in rte_mbuf
> > >>
> > >> Hi Helin,
> > >>
> > >> +CC Neil
> > >>
> > >> On 06/01/2015 09:33 AM, Helin Zhang wrote:
> > >>> In order to unify the packet type, the field of 'packet_type' in
> > >>> 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
> > >>> Accordingly, some fields in 'struct rte_mbuf' are re-organized to
> > >>> support this change for Vector PMD. As 'struct rte_kni_mbuf' for
> > >>> KNI should be right mapped to 'struct rte_mbuf', it should be
> > >>> modified accordingly. In addition, Vector PMD of ixgbe is disabled
> > >>> by default, as 'struct rte_mbuf' changed.
> > >>> To avoid breaking ABI compatibility, all the changes would be
> > >>> enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
> > >>
> > >> What are the plans for this compile-time option in the future?
> > >>
> > >> I wonder what are the benefits of having this option in terms of
> > >> ABI compatibility: when it is disabled, it is ABI-compatible but
> > >> the packet-type feature is not present, and when it is enabled we
> > >> have the feature but it breaks the compatibility.
> > >>
> > >> In my opinion, the v5 is preferable: for this kind of features, I
> > >> don't see how the ABI can be preserved, and I think packet-type
> > >> won't be the only feature that will modify the mbuf structure. I
> > >> think the process described here should be applied:
> > >> http://dpdk.org/browse/dpdk/tree/doc/guides/rel_notes/abi.rst
> > >>
> > >> (starting from "Some ABI changes may be too significant to
> > >> reasonably maintain multiple versions of").
> > >>
> > >>
> > >> Regards,
> > >> Olivier
> > >>
> > >
> > > This is just like the change that Steve (Cunming) Liang submitted
> > > for Interrupt Mode. We have the same problem in both cases: we
> > want to find a way to get the features included, but need to comply
> > with our ABI policy. So, in both cases, the proposal is to add a
> > config option to enable the change by default, so we maintain backward
> compatibility. Users that want these changes, and are willing to accept the
> associated ABI change, have to specifically enable them.
> > >
> > > We can note in the Deprecation Notices in the Release Notes for 2.1
> > > that these config options will be removed in 2.2. The features
> > will then be enabled by default.
> > >
> > > This seems like a good compromise which allows us to get these changes into
> 2.1 but avoids breaking the ABI policy.
> >
> > Sorry for the late answer.
> >
> > After some thoughts on this topic, I understand that having a
> > compile-time option is perhaps a good compromise between keeping
> > compatibility and having new features earlier.
> >
> > I'm just afraid about having one #ifdef in the code for each new
> > feature that cannot keep the ABI compatibility.
> > What do you think about having one option -- let's call it
> > "CONFIG_RTE_NEXT_ABI" --, that is disabled by default, and that would
> > surround any new feature that breaks the ABI?
>
> I am not Tim/Helin, but really like that idea :) Konstantin
It seems most people like Olivier's idea of introducing CONFIG_RTE_NEXT_ABI. Any objections?
If none, I will rework my patches with that.
- Helin
>
>
> >
> > This would have several advantages:
> > - only 2 cases (on or off), the combinatorial is smaller than
> > having one option per feature
> > - all next features breaking the abi can be identified by a grep
> > - the code inside the #ifdef can be enabled in a simple operation
> > by Thomas after each release.
> >
> > Thomas, any comment?
> >
> > Regards,
> > Olivier
> >
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-10 16:14 ` Thomas Monjalon
@ 2015-06-12 7:24 ` Panu Matilainen
2015-06-12 7:43 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Panu Matilainen @ 2015-06-12 7:24 UTC (permalink / raw)
To: Thomas Monjalon, Olivier MATZ, O'Driscoll, Tim, Zhang, Helin,
nhorman
Cc: dev
On 06/10/2015 07:14 PM, Thomas Monjalon wrote:
> 2015-06-10 16:32, Olivier MATZ:
>> On 06/02/2015 03:27 PM, O'Driscoll, Tim wrote:
>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
>>>> On 06/01/2015 09:33 AM, Helin Zhang wrote:
>>>>> In order to unify the packet type, the field of 'packet_type' in
>>>>> 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
>>>>> Accordingly, some fields in 'struct rte_mbuf' are re-organized to
>>>>> support this change for Vector PMD. As 'struct rte_kni_mbuf' for
>>>>> KNI should be right mapped to 'struct rte_mbuf', it should be
>>>>> modified accordingly. In addition, Vector PMD of ixgbe is disabled
>>>>> by default, as 'struct rte_mbuf' changed.
>>>>> To avoid breaking ABI compatibility, all the changes would be
>>>>> enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
>>>>
>>>> What are the plans for this compile-time option in the future?
>>>>
>>>> I wonder what are the benefits of having this option in terms
>>>> of ABI compatibility: when it is disabled, it is ABI-compatible but
>>>> the packet-type feature is not present, and when it is enabled we
>>>> have the feature but it breaks the compatibility.
>>>>
>>>> In my opinion, the v5 is preferable: for this kind of features, I
>>>> don't see how the ABI can be preserved, and I think packet-type
>>>> won't be the only feature that will modify the mbuf structure. I think
>>>> the process described here should be applied:
>>>> http://dpdk.org/browse/dpdk/tree/doc/guides/rel_notes/abi.rst
>>>>
>>>> (starting from "Some ABI changes may be too significant to reasonably
>>>> maintain multiple versions of").
>>>
>>> This is just like the change that Steve (Cunming) Liang submitted for
>>> Interrupt Mode. We have the same problem in both cases: we want to find
>>> a way to get the features included, but need to comply with our ABI
>>> policy. So, in both cases, the proposal is to add a config option to
>>> enable the change by default, so we maintain backward compatibility.
>>> Users that want these changes, and are willing to accept the
>>> associated ABI change, have to specifically enable them.
>>>
>>> We can note in the Deprecation Notices in the Release Notes for 2.1
>>> that these config options will be removed in 2.2. The features will
>>> then be enabled by default.
>>>
>>> This seems like a good compromise which allows us to get these changes
>>> into 2.1 but avoids breaking the ABI policy.
>>
>> Sorry for the late answer.
>>
>> After some thoughts on this topic, I understand that having a
>> compile-time option is perhaps a good compromise between
>> keeping compatibility and having new features earlier.
>>
>> I'm just afraid about having one #ifdef in the code for
>> each new feature that cannot keep the ABI compatibility.
>> What do you think about having one option -- let's call
>> it "CONFIG_RTE_NEXT_ABI" --, that is disabled by default,
>> and that would surround any new feature that breaks the
>> ABI?
>>
>> This would have several advantages:
>> - only 2 cases (on or off), the combinatorial is smaller than
>> having one option per feature
>> - all next features breaking the abi can be identified by a grep
>> - the code inside the #ifdef can be enabled in a simple operation
>> by Thomas after each release.
>>
>> Thomas, any comment?
>
> As previously discussed (1to1) with Olivier, I think that's a good proposal
> to introduce changes breaking deeply the ABI.
>
> Let's sum up the current policy:
> 1/ For changes which have a limited impact on the ABI, the backward compatibility
> must be kept during 1 release including the notice in doc/guides/rel_notes/abi.rst.
> 2/ For important changes like mbuf rework, there was an agreement on skipping the
> backward compatibility after having 3 acknowledgements and an 1-release long notice.
> Then the ABI numbering must be incremented.
>
> This CONFIG_RTE_NEXT_ABI proposal would change the rules for the second case.
> In order to be adopted, a patch for the file doc/guides/rel_notes/abi.rst must
> be submitted and strongly acknowledged.
>
> The ABI numbering must be also clearly explained:
> 1/ Should we have different libraries version number depending of CONFIG_RTE_NEXT_ABI?
> It seems straightforward to use "ifeq" when LIBABIVER in the Makefiles
An incompatible ABI must be reflected by a soname change, otherwise the
whole library versioning is irrelevant.
> 2/ Are we able to have some "if CONFIG_RTE_NEXT_ABI" statement in the .map files?
> Maybe we should remove these files and generate them with some preprocessing.
>
> Neil, as the ABI policy author, what is your opinion?
I'm not Neil but my 5c...
Working around the ABI compatibility policy via config options seems like a
slippery slope. Going forward this will likely mean there are always two
different ABIs for any given version, and the thought of keeping track
of it all in a truly compatible manner makes my head hurt.
That said, it's easy to understand the desire to move faster than the ABI
policy allows. In a project where so many structs are in the open it
gets hard to do much of anything at all without breaking the ABI.
The issue could be mitigated somewhat by reserving some space at the end
of the structs, e.g. when the ABI needs to be changed anyway, but that has
obvious downsides as well. The other options I see tend to revolve
around changing release policies one way or the other: releasing ABI-compatible
micro versions between minor versions and relaxing the ABI
policy a bit, or just releasing new minor versions more often than the
current cycle.
- Panu -
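A minimal sketch of the "reserving some space at the end of the structs" idea
mentioned above; the struct and field names are hypothetical, and the extra
memory every instance pays for the reserved bytes is part of the obvious
downside.

    #include <stdint.h>

    /* Hedged sketch: spare bytes reserved at the end of a public struct so a
     * later release can add fields without changing the struct size or the
     * offsets of existing fields, and thus without breaking the ABI. */
    struct example_public_struct {
            uint32_t existing_field;
            uint8_t  reserved[32];  /* to be claimed by future additions */
    };

A later field would then be carved out of reserved[] instead of growing the
struct.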
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-12 7:24 ` Panu Matilainen
@ 2015-06-12 7:43 ` Zhang, Helin
2015-06-12 8:15 ` Panu Matilainen
0 siblings, 1 reply; 257+ messages in thread
From: Zhang, Helin @ 2015-06-12 7:43 UTC (permalink / raw)
To: Panu Matilainen, Thomas Monjalon, Olivier MATZ, O'Driscoll,
Tim, nhorman
Cc: dev
> -----Original Message-----
> From: Panu Matilainen [mailto:pmatilai@redhat.com]
> Sent: Friday, June 12, 2015 3:24 PM
> To: Thomas Monjalon; Olivier MATZ; O'Driscoll, Tim; Zhang, Helin;
> nhorman@tuxdriver.com
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in
> rte_mbuf
>
> On 06/10/2015 07:14 PM, Thomas Monjalon wrote:
> > 2015-06-10 16:32, Olivier MATZ:
> >> On 06/02/2015 03:27 PM, O'Driscoll, Tim wrote:
> >>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
> >>>> On 06/01/2015 09:33 AM, Helin Zhang wrote:
> >>>>> In order to unify the packet type, the field of 'packet_type' in
> >>>>> 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
> >>>>> Accordingly, some fields in 'struct rte_mbuf' are re-organized to
> >>>>> support this change for Vector PMD. As 'struct rte_kni_mbuf' for
> >>>>> KNI should be right mapped to 'struct rte_mbuf', it should be
> >>>>> modified accordingly. In addition, Vector PMD of ixgbe is disabled
> >>>>> by default, as 'struct rte_mbuf' changed.
> >>>>> To avoid breaking ABI compatibility, all the changes would be
> >>>>> enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
> >>>>
> >>>> What are the plans for this compile-time option in the future?
> >>>>
> >>>> I wonder what are the benefits of having this option in terms of
> >>>> ABI compatibility: when it is disabled, it is ABI-compatible but
> >>>> the packet-type feature is not present, and when it is enabled we
> >>>> have the feature but it breaks the compatibility.
> >>>>
> >>>> In my opinion, the v5 is preferable: for this kind of features, I
> >>>> don't see how the ABI can be preserved, and I think packet-type
> >>>> won't be the only feature that will modify the mbuf structure. I
> >>>> think the process described here should be applied:
> >>>> http://dpdk.org/browse/dpdk/tree/doc/guides/rel_notes/abi.rst
> >>>>
> >>>> (starting from "Some ABI changes may be too significant to
> >>>> reasonably maintain multiple versions of").
> >>>
> >>> This is just like the change that Steve (Cunming) Liang submitted
> >>> for Interrupt Mode. We have the same problem in both cases: we want
> >>> to find a way to get the features included, but need to comply with
> >>> our ABI policy. So, in both cases, the proposal is to add a config
> >>> option to enable the change by default, so we maintain backward
> compatibility.
> >>> Users that want these changes, and are willing to accept the
> >>> associated ABI change, have to specifically enable them.
> >>>
> >>> We can note in the Deprecation Notices in the Release Notes for 2.1
> >>> that these config options will be removed in 2.2. The features will
> >>> then be enabled by default.
> >>>
> >>> This seems like a good compromise which allows us to get these
> >>> changes into 2.1 but avoids breaking the ABI policy.
> >>
> >> Sorry for the late answer.
> >>
> >> After some thoughts on this topic, I understand that having a
> >> compile-time option is perhaps a good compromise between keeping
> >> compatibility and having new features earlier.
> >>
> >> I'm just afraid about having one #ifdef in the code for each new
> >> feature that cannot keep the ABI compatibility.
> >> What do you think about having one option -- let's call it
> >> "CONFIG_RTE_NEXT_ABI" --, that is disabled by default, and that would
> >> surround any new feature that breaks the ABI?
> >>
> >> This would have several advantages:
> >> - only 2 cases (on or off), the combinatorial is smaller than
> >> having one option per feature
> >> - all next features breaking the abi can be identified by a grep
> >> - the code inside the #ifdef can be enabled in a simple operation
> >> by Thomas after each release.
> >>
> >> Thomas, any comment?
> >
> > As previously discussed (1to1) with Olivier, I think that's a good
> > proposal to introduce changes breaking deeply the ABI.
> >
> > Let's sum up the current policy:
> > 1/ For changes which have a limited impact on the ABI, the backward
> > compatibility must be kept during 1 release including the notice in
> doc/guides/rel_notes/abi.rst.
> > 2/ For important changes like mbuf rework, there was an agreement on
> > skipping the backward compatibility after having 3 acknowledgements and an
> 1-release long notice.
> > Then the ABI numbering must be incremented.
> >
> > This CONFIG_RTE_NEXT_ABI proposal would change the rules for the second
> case.
> > In order to be adopted, a patch for the file
> > doc/guides/rel_notes/abi.rst must be submitted and strongly acknowledged.
> >
> > The ABI numbering must be also clearly explained:
> > 1/ Should we have different libraries version number depending of
> CONFIG_RTE_NEXT_ABI?
> > It seems straightforward to use "ifeq" when LIBABIVER in the Makefiles
>
> An incompatible ABI must be reflected by a soname change, otherwise the
> whole library versioning is irrelevant.
>
> > 2/ Are we able to have some "if CONFIG_RTE_NEXT_ABI" statement in
> the .map files?
> > Maybe we should remove these files and generate them with some
> preprocessing.
> >
> > Neil, as the ABI policy author, what is your opinion?
>
> I'm not Neil but my 5c...
>
> Working around ABI compatibility policy via config options seems like a slippery
> slope. Going forward this will likely mean there are always two different ABIs for
> any given version, and the thought of keeping track of it all in a truly compatible
> manner makes my head hurt.
>
> That said its easy to understand the desire to move faster than the ABI policy
> allows. In a project where so many structs are in the open it gets hard to do much
> anything at all without breaking the ABI.
>
> The issue could be mitigated somewhat by reserving some space at the end of
> the structs eg when the ABI needs to be changed anyway, but it has obvious
> downsides as well. The other options I see tend to revolve around changing
> release policies one way or the other: releasing ABI compatible micro versions
> between minor versions and relaxing the ABI policy a bit, or just releasing new
> minor versions more often than the current cycle.
>
> - Panu -
Does it mean releasing an R2.01 right now, based on R2.0, with an announcement of all the ABI
changes, and then releasing R2.1 several weeks later with all the code changes?
- Helin
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-12 7:43 ` Zhang, Helin
@ 2015-06-12 8:15 ` Panu Matilainen
2015-06-12 8:28 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Panu Matilainen @ 2015-06-12 8:15 UTC (permalink / raw)
To: Zhang, Helin, Thomas Monjalon, Olivier MATZ, O'Driscoll, Tim,
nhorman
Cc: dev
On 06/12/2015 10:43 AM, Zhang, Helin wrote:
>
>
>> -----Original Message-----
>> From: Panu Matilainen [mailto:pmatilai@redhat.com]
>> Sent: Friday, June 12, 2015 3:24 PM
>> To: Thomas Monjalon; Olivier MATZ; O'Driscoll, Tim; Zhang, Helin;
>> nhorman@tuxdriver.com
>> Cc: dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in
>> rte_mbuf
>>
>> On 06/10/2015 07:14 PM, Thomas Monjalon wrote:
>>> 2015-06-10 16:32, Olivier MATZ:
>>>> On 06/02/2015 03:27 PM, O'Driscoll, Tim wrote:
>>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
>>>>>> On 06/01/2015 09:33 AM, Helin Zhang wrote:
>>>>>>> In order to unify the packet type, the field of 'packet_type' in
>>>>>>> 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
>>>>>>> Accordingly, some fields in 'struct rte_mbuf' are re-organized to
>>>>>>> support this change for Vector PMD. As 'struct rte_kni_mbuf' for
>>>>>>> KNI should be right mapped to 'struct rte_mbuf', it should be
>>>>>>> modified accordingly. In addition, Vector PMD of ixgbe is disabled
>>>>>>> by default, as 'struct rte_mbuf' changed.
>>>>>>> To avoid breaking ABI compatibility, all the changes would be
>>>>>>> enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
>>>>>>
>>>>>> What are the plans for this compile-time option in the future?
>>>>>>
>>>>>> I wonder what are the benefits of having this option in terms of
>>>>>> ABI compatibility: when it is disabled, it is ABI-compatible but
>>>>>> the packet-type feature is not present, and when it is enabled we
>>>>>> have the feature but it breaks the compatibility.
>>>>>>
>>>>>> In my opinion, the v5 is preferable: for this kind of features, I
>>>>>> don't see how the ABI can be preserved, and I think packet-type
>>>>>> won't be the only feature that will modify the mbuf structure. I
>>>>>> think the process described here should be applied:
>>>>>> http://dpdk.org/browse/dpdk/tree/doc/guides/rel_notes/abi.rst
>>>>>>
>>>>>> (starting from "Some ABI changes may be too significant to
>>>>>> reasonably maintain multiple versions of").
>>>>>
>>>>> This is just like the change that Steve (Cunming) Liang submitted
>>>>> for Interrupt Mode. We have the same problem in both cases: we want
>>>>> to find a way to get the features included, but need to comply with
>>>>> our ABI policy. So, in both cases, the proposal is to add a config
>>>>> option to enable the change by default, so we maintain backward
>> compatibility.
>>>>> Users that want these changes, and are willing to accept the
>>>>> associated ABI change, have to specifically enable them.
>>>>>
>>>>> We can note in the Deprecation Notices in the Release Notes for 2.1
>>>>> that these config options will be removed in 2.2. The features will
>>>>> then be enabled by default.
>>>>>
>>>>> This seems like a good compromise which allows us to get these
>>>>> changes into 2.1 but avoids breaking the ABI policy.
>>>>
>>>> Sorry for the late answer.
>>>>
>>>> After some thoughts on this topic, I understand that having a
>>>> compile-time option is perhaps a good compromise between keeping
>>>> compatibility and having new features earlier.
>>>>
>>>> I'm just afraid about having one #ifdef in the code for each new
>>>> feature that cannot keep the ABI compatibility.
>>>> What do you think about having one option -- let's call it
>>>> "CONFIG_RTE_NEXT_ABI" --, that is disabled by default, and that would
>>>> surround any new feature that breaks the ABI?
>>>>
>>>> This would have several advantages:
>>>> - only 2 cases (on or off), the combinatorial is smaller than
>>>> having one option per feature
>>>> - all next features breaking the abi can be identified by a grep
>>>> - the code inside the #ifdef can be enabled in a simple operation
>>>> by Thomas after each release.
>>>>
>>>> Thomas, any comment?
>>>
>>> As previously discussed (1to1) with Olivier, I think that's a good
>>> proposal to introduce changes breaking deeply the ABI.
>>>
>>> Let's sum up the current policy:
>>> 1/ For changes which have a limited impact on the ABI, the backward
>>> compatibility must be kept during 1 release including the notice in
>> doc/guides/rel_notes/abi.rst.
>>> 2/ For important changes like mbuf rework, there was an agreement on
>>> skipping the backward compatibility after having 3 acknowledgements and an
>> 1-release long notice.
>>> Then the ABI numbering must be incremented.
>>>
>>> This CONFIG_RTE_NEXT_ABI proposal would change the rules for the second
>> case.
>>> In order to be adopted, a patch for the file
>>> doc/guides/rel_notes/abi.rst must be submitted and strongly acknowledged.
>>>
>>> The ABI numbering must be also clearly explained:
>>> 1/ Should we have different libraries version number depending of
>> CONFIG_RTE_NEXT_ABI?
>>> It seems straightforward to use "ifeq" when LIBABIVER in the Makefiles
>>
>> An incompatible ABI must be reflected by a soname change, otherwise the
>> whole library versioning is irrelevant.
>>
>>> 2/ Are we able to have some "if CONFIG_RTE_NEXT_ABI" statement in
>> the .map files?
>>> Maybe we should remove these files and generate them with some
>> preprocessing.
>>>
>>> Neil, as the ABI policy author, what is your opinion?
>>
>> I'm not Neil but my 5c...
>>
>> Working around ABI compatibility policy via config options seems like a slippery
>> slope. Going forward this will likely mean there are always two different ABIs for
>> any given version, and the thought of keeping track of it all in a truly compatible
>> manner makes my head hurt.
>>
>> That said its easy to understand the desire to move faster than the ABI policy
>> allows. In a project where so many structs are in the open it gets hard to do much
>> anything at all without breaking the ABI.
>>
>> The issue could be mitigated somewhat by reserving some space at the end of
>> the structs eg when the ABI needs to be changed anyway, but it has obvious
>> downsides as well. The other options I see tend to revolve around changing
>> release policies one way or the other: releasing ABI compatible micro versions
>> between minor versions and relaxing the ABI policy a bit, or just releasing new
>> minor versions more often than the current cycle.
>>
>> - Panu -
>
> Does it mean releasing R2.01 right now with announcement of all ABI changes, which
> based on R2.0 first, and then releasing R2.1 several weeks later with all the code changes?
Something like that, but I'd think it's too late for any big release
model / policy changes for this particular cycle.
I also do not want to undermine the ABI policy we just got in place, but
since people are actively looking for ways to work around it anyway it's
better to map out all the possibilities. One of them is committing to
longer-term maintenance of releases (via ABI-compatible micro version
updates), another one is shortening the cycles. Both achieve roughly the
same goals with differences in emphasis perhaps, but more releases
require more resources for maintaining, testing etc, so...
- Panu -
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-12 8:15 ` Panu Matilainen
@ 2015-06-12 8:28 ` Zhang, Helin
2015-06-12 9:00 ` Panu Matilainen
2015-06-12 9:07 ` Bruce Richardson
0 siblings, 2 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-06-12 8:28 UTC (permalink / raw)
To: Panu Matilainen, Thomas Monjalon, Olivier MATZ, O'Driscoll,
Tim, nhorman
Cc: dev
> -----Original Message-----
> From: Panu Matilainen [mailto:pmatilai@redhat.com]
> Sent: Friday, June 12, 2015 4:15 PM
> To: Zhang, Helin; Thomas Monjalon; Olivier MATZ; O'Driscoll, Tim;
> nhorman@tuxdriver.com
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in
> rte_mbuf
>
> On 06/12/2015 10:43 AM, Zhang, Helin wrote:
> >
> >
> >> -----Original Message-----
> >> From: Panu Matilainen [mailto:pmatilai@redhat.com]
> >> Sent: Friday, June 12, 2015 3:24 PM
> >> To: Thomas Monjalon; Olivier MATZ; O'Driscoll, Tim; Zhang, Helin;
> >> nhorman@tuxdriver.com
> >> Cc: dev@dpdk.org
> >> Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type
> >> in rte_mbuf
> >>
> >> On 06/10/2015 07:14 PM, Thomas Monjalon wrote:
> >>> 2015-06-10 16:32, Olivier MATZ:
> >>>> On 06/02/2015 03:27 PM, O'Driscoll, Tim wrote:
> >>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
> >>>>>> On 06/01/2015 09:33 AM, Helin Zhang wrote:
> >>>>>>> In order to unify the packet type, the field of 'packet_type' in
> >>>>>>> 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
> >>>>>>> Accordingly, some fields in 'struct rte_mbuf' are re-organized
> >>>>>>> to support this change for Vector PMD. As 'struct rte_kni_mbuf'
> >>>>>>> for KNI should be right mapped to 'struct rte_mbuf', it should
> >>>>>>> be modified accordingly. In addition, Vector PMD of ixgbe is
> >>>>>>> disabled by default, as 'struct rte_mbuf' changed.
> >>>>>>> To avoid breaking ABI compatibility, all the changes would be
> >>>>>>> enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
> >>>>>>
> >>>>>> What are the plans for this compile-time option in the future?
> >>>>>>
> >>>>>> I wonder what are the benefits of having this option in terms of
> >>>>>> ABI compatibility: when it is disabled, it is ABI-compatible but
> >>>>>> the packet-type feature is not present, and when it is enabled we
> >>>>>> have the feature but it breaks the compatibility.
> >>>>>>
> >>>>>> In my opinion, the v5 is preferable: for this kind of features, I
> >>>>>> don't see how the ABI can be preserved, and I think packet-type
> >>>>>> won't be the only feature that will modify the mbuf structure. I
> >>>>>> think the process described here should be applied:
> >>>>>> http://dpdk.org/browse/dpdk/tree/doc/guides/rel_notes/abi.rst
> >>>>>>
> >>>>>> (starting from "Some ABI changes may be too significant to
> >>>>>> reasonably maintain multiple versions of").
> >>>>>
> >>>>> This is just like the change that Steve (Cunming) Liang submitted
> >>>>> for Interrupt Mode. We have the same problem in both cases: we
> >>>>> want to find a way to get the features included, but need to
> >>>>> comply with our ABI policy. So, in both cases, the proposal is to
> >>>>> add a config option to enable the change by default, so we
> >>>>> maintain backward
> >> compatibility.
> >>>>> Users that want these changes, and are willing to accept the
> >>>>> associated ABI change, have to specifically enable them.
> >>>>>
> >>>>> We can note in the Deprecation Notices in the Release Notes for
> >>>>> 2.1 that these config options will be removed in 2.2. The features
> >>>>> will then be enabled by default.
> >>>>>
> >>>>> This seems like a good compromise which allows us to get these
> >>>>> changes into 2.1 but avoids breaking the ABI policy.
> >>>>
> >>>> Sorry for the late answer.
> >>>>
> >>>> After some thoughts on this topic, I understand that having a
> >>>> compile-time option is perhaps a good compromise between keeping
> >>>> compatibility and having new features earlier.
> >>>>
> >>>> I'm just afraid about having one #ifdef in the code for each new
> >>>> feature that cannot keep the ABI compatibility.
> >>>> What do you think about having one option -- let's call it
> >>>> "CONFIG_RTE_NEXT_ABI" --, that is disabled by default, and that
> >>>> would surround any new feature that breaks the ABI?
> >>>>
> >>>> This would have several advantages:
> >>>> - only 2 cases (on or off), the combinatorial is smaller than
> >>>> having one option per feature
> >>>> - all next features breaking the abi can be identified by a grep
> >>>> - the code inside the #ifdef can be enabled in a simple operation
> >>>> by Thomas after each release.
> >>>>
> >>>> Thomas, any comment?
> >>>
> >>> As previously discussed (1to1) with Olivier, I think that's a good
> >>> proposal to introduce changes breaking deeply the ABI.
> >>>
> >>> Let's sum up the current policy:
> >>> 1/ For changes which have a limited impact on the ABI, the backward
> >>> compatibility must be kept during 1 release including the notice in
> >> doc/guides/rel_notes/abi.rst.
> >>> 2/ For important changes like mbuf rework, there was an agreement on
> >>> skipping the backward compatibility after having 3 acknowledgements
> >>> and an
> >> 1-release long notice.
> >>> Then the ABI numbering must be incremented.
> >>>
> >>> This CONFIG_RTE_NEXT_ABI proposal would change the rules for the
> >>> second
> >> case.
> >>> In order to be adopted, a patch for the file
> >>> doc/guides/rel_notes/abi.rst must be submitted and strongly
> acknowledged.
> >>>
> >>> The ABI numbering must be also clearly explained:
> >>> 1/ Should we have different libraries version number depending of
> >> CONFIG_RTE_NEXT_ABI?
> >>> It seems straightforward to use "ifeq" when LIBABIVER in the
> >>> Makefiles
> >>
> >> An incompatible ABI must be reflected by a soname change, otherwise
> >> the whole library versioning is irrelevant.
> >>
> >>> 2/ Are we able to have some "if CONFIG_RTE_NEXT_ABI" statement in
> >> the .map files?
> >>> Maybe we should remove these files and generate them with some
> >> preprocessing.
> >>>
> >>> Neil, as the ABI policy author, what is your opinion?
> >>
> >> I'm not Neil but my 5c...
> >>
> >> Working around ABI compatibility policy via config options seems like
> >> a slippery slope. Going forward this will likely mean there are
> >> always two different ABIs for any given version, and the thought of
> >> keeping track of it all in a truly compatible manner makes my head hurt.
> >>
> >> That said its easy to understand the desire to move faster than the
> >> ABI policy allows. In a project where so many structs are in the open
> >> it gets hard to do much anything at all without breaking the ABI.
> >>
> >> The issue could be mitigated somewhat by reserving some space at the
> >> end of the structs eg when the ABI needs to be changed anyway, but it
> >> has obvious downsides as well. The other options I see tend to
> >> revolve around changing release policies one way or the other:
> >> releasing ABI compatible micro versions between minor versions and
> >> relaxing the ABI policy a bit, or just releasing new minor versions more often
> than the current cycle.
> >>
> >> - Panu -
> >
> > Does it mean releasing R2.01 right now with announcement of all ABI
> > changes, which based on R2.0 first, and then releasing R2.1 several weeks later
> with all the code changes?
>
> Something like that, but I'd think its too late for any big release model / policy
> changes for this particular cycle.
>
> I also do not want to undermine the ABI policy we just got in place, but since
> people are actively looking for ways to work around it anyway its better to map
> out all the possibilities. One of them is committing to longer term maintenance of
> releases (via ABI compatible micro version updates), another one is shortening
> the cycles. Both achieve roughly the same goals with differences in emphasis
> perhaps, but more releases requires more resources on maintaining, testing etc
> so...
R2.01 could just have all the same content as R2.0, with an additional ABI announcement.
Then nothing needs to be tested.
- Helin
>
> - Panu -
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-12 8:28 ` Zhang, Helin
@ 2015-06-12 9:00 ` Panu Matilainen
2015-06-12 9:07 ` Bruce Richardson
1 sibling, 0 replies; 257+ messages in thread
From: Panu Matilainen @ 2015-06-12 9:00 UTC (permalink / raw)
To: Zhang, Helin, Thomas Monjalon, Olivier MATZ, O'Driscoll, Tim,
nhorman
Cc: dev
On 06/12/2015 11:28 AM, Zhang, Helin wrote:
>
>
>> -----Original Message-----
>> From: Panu Matilainen [mailto:pmatilai@redhat.com]
>> Sent: Friday, June 12, 2015 4:15 PM
>> To: Zhang, Helin; Thomas Monjalon; Olivier MATZ; O'Driscoll, Tim;
>> nhorman@tuxdriver.com
>> Cc: dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in
>> rte_mbuf
>>
>> On 06/12/2015 10:43 AM, Zhang, Helin wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Panu Matilainen [mailto:pmatilai@redhat.com]
>>>> Sent: Friday, June 12, 2015 3:24 PM
>>>> To: Thomas Monjalon; Olivier MATZ; O'Driscoll, Tim; Zhang, Helin;
>>>> nhorman@tuxdriver.com
>>>> Cc: dev@dpdk.org
>>>> Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type
>>>> in rte_mbuf
>>>>
>>>> On 06/10/2015 07:14 PM, Thomas Monjalon wrote:
>>>>> 2015-06-10 16:32, Olivier MATZ:
>>>>>> On 06/02/2015 03:27 PM, O'Driscoll, Tim wrote:
>>>>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
>>>>>>>> On 06/01/2015 09:33 AM, Helin Zhang wrote:
>>>>>>>>> In order to unify the packet type, the field of 'packet_type' in
>>>>>>>>> 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
>>>>>>>>> Accordingly, some fields in 'struct rte_mbuf' are re-organized
>>>>>>>>> to support this change for Vector PMD. As 'struct rte_kni_mbuf'
>>>>>>>>> for KNI should be right mapped to 'struct rte_mbuf', it should
>>>>>>>>> be modified accordingly. In addition, Vector PMD of ixgbe is
>>>>>>>>> disabled by default, as 'struct rte_mbuf' changed.
>>>>>>>>> To avoid breaking ABI compatibility, all the changes would be
>>>>>>>>> enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
>>>>>>>>
>>>>>>>> What are the plans for this compile-time option in the future?
>>>>>>>>
>>>>>>>> I wonder what are the benefits of having this option in terms of
>>>>>>>> ABI compatibility: when it is disabled, it is ABI-compatible but
>>>>>>>> the packet-type feature is not present, and when it is enabled we
>>>>>>>> have the feature but it breaks the compatibility.
>>>>>>>>
>>>>>>>> In my opinion, the v5 is preferable: for this kind of features, I
>>>>>>>> don't see how the ABI can be preserved, and I think packet-type
>>>>>>>> won't be the only feature that will modify the mbuf structure. I
>>>>>>>> think the process described here should be applied:
>>>>>>>> http://dpdk.org/browse/dpdk/tree/doc/guides/rel_notes/abi.rst
>>>>>>>>
>>>>>>>> (starting from "Some ABI changes may be too significant to
>>>>>>>> reasonably maintain multiple versions of").
>>>>>>>
>>>>>>> This is just like the change that Steve (Cunming) Liang submitted
>>>>>>> for Interrupt Mode. We have the same problem in both cases: we
>>>>>>> want to find a way to get the features included, but need to
>>>>>>> comply with our ABI policy. So, in both cases, the proposal is to
>>>>>>> add a config option to enable the change by default, so we
>>>>>>> maintain backward
>>>> compatibility.
>>>>>>> Users that want these changes, and are willing to accept the
>>>>>>> associated ABI change, have to specifically enable them.
>>>>>>>
>>>>>>> We can note in the Deprecation Notices in the Release Notes for
>>>>>>> 2.1 that these config options will be removed in 2.2. The features
>>>>>>> will then be enabled by default.
>>>>>>>
>>>>>>> This seems like a good compromise which allows us to get these
>>>>>>> changes into 2.1 but avoids breaking the ABI policy.
>>>>>>
>>>>>> Sorry for the late answer.
>>>>>>
>>>>>> After some thoughts on this topic, I understand that having a
>>>>>> compile-time option is perhaps a good compromise between keeping
>>>>>> compatibility and having new features earlier.
>>>>>>
>>>>>> I'm just afraid about having one #ifdef in the code for each new
>>>>>> feature that cannot keep the ABI compatibility.
>>>>>> What do you think about having one option -- let's call it
>>>>>> "CONFIG_RTE_NEXT_ABI" --, that is disabled by default, and that
>>>>>> would surround any new feature that breaks the ABI?
>>>>>>
>>>>>> This would have several advantages:
>>>>>> - only 2 cases (on or off), the combinatorial is smaller than
>>>>>> having one option per feature
>>>>>> - all next features breaking the abi can be identified by a grep
>>>>>> - the code inside the #ifdef can be enabled in a simple operation
>>>>>> by Thomas after each release.
>>>>>>
>>>>>> Thomas, any comment?
>>>>>
>>>>> As previously discussed (1to1) with Olivier, I think that's a good
>>>>> proposal to introduce changes breaking deeply the ABI.
>>>>>
>>>>> Let's sum up the current policy:
>>>>> 1/ For changes which have a limited impact on the ABI, the backward
>>>>> compatibility must be kept during 1 release including the notice in
>>>> doc/guides/rel_notes/abi.rst.
>>>>> 2/ For important changes like mbuf rework, there was an agreement on
>>>>> skipping the backward compatibility after having 3 acknowledgements
>>>>> and an
>>>> 1-release long notice.
>>>>> Then the ABI numbering must be incremented.
>>>>>
>>>>> This CONFIG_RTE_NEXT_ABI proposal would change the rules for the
>>>>> second
>>>> case.
>>>>> In order to be adopted, a patch for the file
>>>>> doc/guides/rel_notes/abi.rst must be submitted and strongly
>> acknowledged.
>>>>>
>>>>> The ABI numbering must be also clearly explained:
>>>>> 1/ Should we have different libraries version number depending of
>>>> CONFIG_RTE_NEXT_ABI?
>>>>> It seems straightforward to use "ifeq" when LIBABIVER in the
>>>>> Makefiles
>>>>
>>>> An incompatible ABI must be reflected by a soname change, otherwise
>>>> the whole library versioning is irrelevant.
>>>>
>>>>> 2/ Are we able to have some "if CONFIG_RTE_NEXT_ABI" statement in
>>>> the .map files?
>>>>> Maybe we should remove these files and generate them with some
>>>> preprocessing.
>>>>>
>>>>> Neil, as the ABI policy author, what is your opinion?
>>>>
>>>> I'm not Neil but my 5c...
>>>>
>>>> Working around ABI compatibility policy via config options seems like
>>>> a slippery slope. Going forward this will likely mean there are
>>>> always two different ABIs for any given version, and the thought of
>>>> keeping track of it all in a truly compatible manner makes my head hurt.
>>>>
>>>> That said its easy to understand the desire to move faster than the
>>>> ABI policy allows. In a project where so many structs are in the open
>>>> it gets hard to do much anything at all without breaking the ABI.
>>>>
>>>> The issue could be mitigated somewhat by reserving some space at the
>>>> end of the structs eg when the ABI needs to be changed anyway, but it
>>>> has obvious downsides as well. The other options I see tend to
>>>> revolve around changing release policies one way or the other:
>>>> releasing ABI compatible micro versions between minor versions and
>>>> relaxing the ABI policy a bit, or just releasing new minor versions more often
>> than the current cycle.
>>>>
>>>> - Panu -
>>>
>>> Does it mean releasing R2.01 right now with announcement of all ABI
>>> changes, which based on R2.0 first, and then releasing R2.1 several weeks later
>> with all the code changes?
>>
>> Something like that, but I'd think its too late for any big release model / policy
>> changes for this particular cycle.
>>
>> I also do not want to undermine the ABI policy we just got in place, but since
>> people are actively looking for ways to work around it anyway its better to map
>> out all the possibilities. One of them is committing to longer term maintenance of
>> releases (via ABI compatible micro version updates), another one is shortening
>> the cycles. Both achieve roughly the same goals with differences in emphasis
>> perhaps, but more releases requires more resources on maintaining, testing etc
>> so...
> R2.01 could just have all the same of R2.0, with an additional ABI announcement.
> Then nothing needs to be tested.
That's also entirely missing the point of having an ABI policy in the
first place. Its purpose is not to force people to find loopholes in the
policy; it exists for the benefit of other developers building apps on top of DPDK.
- Panu -
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-12 8:28 ` Zhang, Helin
2015-06-12 9:00 ` Panu Matilainen
@ 2015-06-12 9:07 ` Bruce Richardson
1 sibling, 0 replies; 257+ messages in thread
From: Bruce Richardson @ 2015-06-12 9:07 UTC (permalink / raw)
To: Zhang, Helin; +Cc: dev
On Fri, Jun 12, 2015 at 08:28:55AM +0000, Zhang, Helin wrote:
>
>
> > -----Original Message-----
> > From: Panu Matilainen [mailto:pmatilai@redhat.com]
> > Sent: Friday, June 12, 2015 4:15 PM
> > To: Zhang, Helin; Thomas Monjalon; Olivier MATZ; O'Driscoll, Tim;
> > nhorman@tuxdriver.com
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in
> > rte_mbuf
> >
> > On 06/12/2015 10:43 AM, Zhang, Helin wrote:
> > >
> > >
> > >> -----Original Message-----
> > >> From: Panu Matilainen [mailto:pmatilai@redhat.com]
> > >> Sent: Friday, June 12, 2015 3:24 PM
> > >> To: Thomas Monjalon; Olivier MATZ; O'Driscoll, Tim; Zhang, Helin;
> > >> nhorman@tuxdriver.com
> > >> Cc: dev@dpdk.org
> > >> Subject: Re: [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type
> > >> in rte_mbuf
> > >>
> > >> On 06/10/2015 07:14 PM, Thomas Monjalon wrote:
> > >>> 2015-06-10 16:32, Olivier MATZ:
> > >>>> On 06/02/2015 03:27 PM, O'Driscoll, Tim wrote:
> > >>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
> > >>>>>> On 06/01/2015 09:33 AM, Helin Zhang wrote:
> > >>>>>>> In order to unify the packet type, the field of 'packet_type' in
> > >>>>>>> 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
> > >>>>>>> Accordingly, some fields in 'struct rte_mbuf' are re-organized
> > >>>>>>> to support this change for Vector PMD. As 'struct rte_kni_mbuf'
> > >>>>>>> for KNI should be right mapped to 'struct rte_mbuf', it should
> > >>>>>>> be modified accordingly. In addition, Vector PMD of ixgbe is
> > >>>>>>> disabled by default, as 'struct rte_mbuf' changed.
> > >>>>>>> To avoid breaking ABI compatibility, all the changes would be
> > >>>>>>> enabled by RTE_UNIFIED_PKT_TYPE, which is disabled by default.
> > >>>>>>
> > >>>>>> What are the plans for this compile-time option in the future?
> > >>>>>>
> > >>>>>> I wonder what are the benefits of having this option in terms of
> > >>>>>> ABI compatibility: when it is disabled, it is ABI-compatible but
> > >>>>>> the packet-type feature is not present, and when it is enabled we
> > >>>>>> have the feature but it breaks the compatibility.
> > >>>>>>
> > >>>>>> In my opinion, the v5 is preferable: for this kind of features, I
> > >>>>>> don't see how the ABI can be preserved, and I think packet-type
> > >>>>>> won't be the only feature that will modify the mbuf structure. I
> > >>>>>> think the process described here should be applied:
> > >>>>>> http://dpdk.org/browse/dpdk/tree/doc/guides/rel_notes/abi.rst
> > >>>>>>
> > >>>>>> (starting from "Some ABI changes may be too significant to
> > >>>>>> reasonably maintain multiple versions of").
> > >>>>>
> > >>>>> This is just like the change that Steve (Cunming) Liang submitted
> > >>>>> for Interrupt Mode. We have the same problem in both cases: we
> > >>>>> want to find a way to get the features included, but need to
> > >>>>> comply with our ABI policy. So, in both cases, the proposal is to
> > >>>>> add a config option to enable the change by default, so we
> > >>>>> maintain backward
> > >> compatibility.
> > >>>>> Users that want these changes, and are willing to accept the
> > >>>>> associated ABI change, have to specifically enable them.
> > >>>>>
> > >>>>> We can note in the Deprecation Notices in the Release Notes for
> > >>>>> 2.1 that these config options will be removed in 2.2. The features
> > >>>>> will then be enabled by default.
> > >>>>>
> > >>>>> This seems like a good compromise which allows us to get these
> > >>>>> changes into 2.1 but avoids breaking the ABI policy.
> > >>>>
> > >>>> Sorry for the late answer.
> > >>>>
> > >>>> After some thoughts on this topic, I understand that having a
> > >>>> compile-time option is perhaps a good compromise between keeping
> > >>>> compatibility and having new features earlier.
> > >>>>
> > >>>> I'm just afraid about having one #ifdef in the code for each new
> > >>>> feature that cannot keep the ABI compatibility.
> > >>>> What do you think about having one option -- let's call it
> > >>>> "CONFIG_RTE_NEXT_ABI" --, that is disabled by default, and that
> > >>>> would surround any new feature that breaks the ABI?
> > >>>>
> > >>>> This would have several advantages:
> > >>>> - only 2 cases (on or off), the combinatorial is smaller than
> > >>>> having one option per feature
> > >>>> - all next features breaking the abi can be identified by a grep
> > >>>> - the code inside the #ifdef can be enabled in a simple operation
> > >>>> by Thomas after each release.
> > >>>>
> > >>>> Thomas, any comment?
> > >>>
> > >>> As previously discussed (1to1) with Olivier, I think that's a good
> > >>> proposal for introducing changes that deeply break the ABI.
> > >>>
> > >>> Let's sum up the current policy:
> > >>> 1/ For changes which have a limited impact on the ABI, the backward
> > >>> compatibility must be kept during 1 release including the notice in
> > >> doc/guides/rel_notes/abi.rst.
> > >>> 2/ For important changes like mbuf rework, there was an agreement on
> > >>> skipping the backward compatibility after having 3 acknowledgements
> > >>> and a
> > >> 1-release-long notice.
> > >>> Then the ABI numbering must be incremented.
> > >>>
> > >>> This CONFIG_RTE_NEXT_ABI proposal would change the rules for the
> > >>> second
> > >> case.
> > >>> In order to be adopted, a patch for the file
> > >>> doc/guides/rel_notes/abi.rst must be submitted and strongly
> > acknowledged.
> > >>>
> > >>> The ABI numbering must also be clearly explained:
> > >>> 1/ Should we have different library version numbers depending on
> > >> CONFIG_RTE_NEXT_ABI?
> > >>> It seems straightforward to use "ifeq" when setting LIBABIVER in the
> > >>> Makefiles
> > >>
> > >> An incompatible ABI must be reflected by a soname change, otherwise
> > >> the whole library versioning is irrelevant.
> > >>
> > >>> 2/ Are we able to have some "if CONFIG_RTE_NEXT_ABI" statement in
> > >> the .map files?
> > >>> Maybe we should remove these files and generate them with some
> > >> preprocessing.
> > >>>
> > >>> Neil, as the ABI policy author, what is your opinion?
> > >>
> > >> I'm not Neil but my 5c...
> > >>
> > >> Working around ABI compatibility policy via config options seems like
> > >> a slippery slope. Going forward this will likely mean there are
> > >> always two different ABIs for any given version, and the thought of
> > >> keeping track of it all in a truly compatible manner makes my head hurt.
> > >>
> > >> That said it's easy to understand the desire to move faster than the
> > >> ABI policy allows. In a project where so many structs are in the open
> > >> it gets hard to do much of anything at all without breaking the ABI.
> > >>
> > >> The issue could be mitigated somewhat by reserving some space at the
> > >> end of the structs eg when the ABI needs to be changed anyway, but it
> > >> has obvious downsides as well. The other options I see tend to
> > >> revolve around changing release policies one way or the other:
> > >> releasing ABI compatible micro versions between minor versions and
> > >> relaxing the ABI policy a bit, or just releasing new minor versions more often
> > than the current cycle.
> > >>
> > >> - Panu -
> > >
> > > Does it mean releasing R2.01 right now with an announcement of all ABI
> > > changes, based on R2.0 first, and then releasing R2.1 several weeks later
> > with all the code changes?
> >
> > Something like that, but I'd think it's too late for any big release model / policy
> > changes for this particular cycle.
> >
> > I also do not want to undermine the ABI policy we just got in place, but since
> > people are actively looking for ways to work around it anyway it's better to map
> > out all the possibilities. One of them is committing to longer term maintenance of
> > releases (via ABI compatible micro version updates), another one is shortening
> > the cycles. Both achieve roughly the same goals with differences in emphasis
> > perhaps, but more releases requires more resources on maintaining, testing etc
> > so...
> R2.01 could just have all the same content as R2.0, with an additional ABI announcement.
> Then nothing needs to be tested.
>
> - Helin
>
Then it would be a paper exercise just to bypass an ABI policy, so NACK to that idea.
If (and it's a fairly big if) we do decide we need longer-term maintenance
branches for maintaining ABI, then we need to do it properly.
This may include doing things like back-porting relevant (maybe all) features from
later releases that don't break the ABI to the supported version. Bug fixes would
obviously have to be backported.
However, the overhead of this is obvious, since we would now have multiple development
lines to be maintained.
Regards,
/Bruce
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 00/18] unified packet type
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
` (17 preceding siblings ...)
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 18/18] mbuf: remove old packet type bit masks Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
` (18 more replies)
18 siblings, 19 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
Currently only 6 bits which are stored in ol_flags are used to indicate
the packet types. This is not enough, as some NIC hardware can recognize
quite a lot of packet types, e.g. i40e hardware can recognize more than 150
packet types. Hiding those packet types hides hardware offload capabilities
which could be quite useful for improving performance and for end users.
So a unified packet type is needed to support all possible PMDs. The 16-bit
packet_type field in the mbuf structure can be enlarged to 32 bits and used
for this purpose. In addition, all packet types stored in the ol_flags field
can then be removed entirely, freeing those 6 bits of ol_flags as a benefit.
Initially, the 32 bits of packet_type can be divided into several sub-fields
to indicate different packet type information of a packet. The initial design
is to divide those bits into fields for L2 types, L3 types, L4 types,
tunnel types, inner L2 types, inner L3 types and inner L4 types. All PMDs
should translate the offloaded packet types into these 7 fields of
information, for user applications.
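As a rough illustration of the layout just described, the sketch below splits a
32-bit packet_type value into the seven 4-bit fields; the bit positions follow
the mbuf patch later in this series, and the helper name and example value are
illustrative only, not part of the patches.
#include <stdint.h>
#include <stdio.h>
/* Illustrative only: split a 32-bit packet_type into the seven 4-bit fields
 * described above (L2, L3, L4, tunnel, inner L2, inner L3, inner L4);
 * bits 31:28 are reserved. */
static void
dump_packet_type(uint32_t ptype)
{
	unsigned int l2 = ptype & 0xF;
	unsigned int l3 = (ptype >> 4) & 0xF;
	unsigned int l4 = (ptype >> 8) & 0xF;
	unsigned int tun = (ptype >> 12) & 0xF;
	unsigned int in_l2 = (ptype >> 16) & 0xF;
	unsigned int in_l3 = (ptype >> 20) & 0xF;
	unsigned int in_l4 = (ptype >> 24) & 0xF;
	printf("l2=%u l3=%u l4=%u tun=%u inner l2=%u l3=%u l4=%u\n",
	       l2, l3, l4, tun, in_l2, in_l3, in_l4);
}
int main(void)
{
	dump_packet_type(0x00000211); /* e.g. MAC | IPv4 | UDP in the scheme defined later */
	return 0;
}
In practice applications would use the RTE_PTYPE_* masks added later in this
series rather than raw shifts.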
To avoid breaking ABI compatibility, currently all the code changes for
the unified packet type are disabled at compile time by default. Users can
enable them manually by defining the RTE_NEXT_ABI macro. The code changes
will be enabled by default in a future release, and the old version will be
deleted accordingly, after the ABI change process is done.
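For example, an application that wants to keep building against both layouts
until the option becomes the default could guard its accesses the same way the
patches do; a minimal sketch, assuming the series is applied, using the
existing PKT_RX_IPV4_HDR ol_flags bit and the RTE_ETH_IS_IPV4_HDR() macro
added by patch 03:
#include <stdbool.h>
#include <rte_mbuf.h>
/* Sketch only: detect an IPv4 packet both with and without this series.
 * RTE_ETH_IS_IPV4_HDR() only exists when RTE_NEXT_ABI is defined. */
static inline bool
pkt_is_ipv4(const struct rte_mbuf *m)
{
#ifdef RTE_NEXT_ABI
	return RTE_ETH_IS_IPV4_HDR(m->packet_type);
#else
	return (m->ol_flags & PKT_RX_IPV4_HDR) != 0;
#endif
}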
Note that this patch set should be integrated after the patch set
'[PATCH v3 0/7] support i40e QinQ stripping and insertion', to cleanly
resolve the conflicts during integration, as both patch sets modify
'struct rte_mbuf' and the final layout of 'struct rte_mbuf' is key to the
vectorized ixgbe PMD.
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
* Used redefined packet types and enlarged packet_type field for all PMDs
and corresponding applications.
* Removed changes in bond and its relevant application, as there is no need
at all according to the recent bond changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Put vector ixgbe changes right after mbuf changes.
* Disabled vector ixgbe PMD by default, as mbuf layout changed, and then
re-enabled it after vector ixgbe PMD updated.
* Put the definitions of unified packet type into a single patch.
* Minor bug fixes and enhancements in l3fwd example.
v4 changes:
* Added detailed description of each packet types.
* Supported unified packet type of fm10k.
* Added printing logs of packet types of each received packet for rxonly
mode in testpmd.
* Removed several useless lines of code, which blocked packet type unification,
from app/test/packet_burst_generator.c.
v5 changes:
* Added more detailed description for each packet types, together with examples.
* Rolled back the macro definitions of RX packet flags, for ABI compatibility.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
* Integrated with patch set for '[PATCH v3 0/7] support i40e QinQ stripping
and insertion', to clearly solve the conflicts during merging.
Helin Zhang (18):
mbuf: redefine packet_type in rte_mbuf
ixgbe: support unified packet type in vectorized PMD
mbuf: add definitions of unified packet types
e1000: replace bit mask based packet type with unified packet type
ixgbe: replace bit mask based packet type with unified packet type
i40e: replace bit mask based packet type with unified packet type
enic: replace bit mask based packet type with unified packet type
vmxnet3: replace bit mask based packet type with unified packet type
fm10k: replace bit mask based packet type with unified packet type
app/test-pipeline: replace bit mask based packet type with unified
packet type
app/testpmd: replace bit mask based packet type with unified packet
type
app/test: Remove useless code
examples/ip_fragmentation: replace bit mask based packet type with
unified packet type
examples/ip_reassembly: replace bit mask based packet type with
unified packet type
examples/l3fwd-acl: replace bit mask based packet type with unified
packet type
examples/l3fwd-power: replace bit mask based packet type with unified
packet type
examples/l3fwd: replace bit mask based packet type with unified packet
type
mbuf: remove old packet type bit masks
app/test-pipeline/pipeline_hash.c | 13 +
app/test-pmd/csumonly.c | 14 +
app/test-pmd/rxonly.c | 183 +++++++
app/test/packet_burst_generator.c | 6 +-
drivers/net/e1000/igb_rxtx.c | 102 ++++
drivers/net/enic/enic_main.c | 26 +
drivers/net/fm10k/fm10k_rxtx.c | 27 ++
drivers/net/i40e/i40e_rxtx.c | 528 +++++++++++++++++++++
drivers/net/ixgbe/ixgbe_rxtx.c | 163 +++++++
drivers/net/ixgbe/ixgbe_rxtx_vec.c | 75 ++-
drivers/net/vmxnet3/vmxnet3_rxtx.c | 8 +
examples/ip_fragmentation/main.c | 9 +
examples/ip_reassembly/main.c | 9 +
examples/l3fwd-acl/main.c | 29 +-
examples/l3fwd-power/main.c | 8 +
examples/l3fwd/main.c | 123 ++++-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 +
lib/librte_mbuf/rte_mbuf.c | 4 +
lib/librte_mbuf/rte_mbuf.h | 514 ++++++++++++++++++++
19 files changed, 1834 insertions(+), 13 deletions(-)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 02/18] ixgbe: support unified packet type in vectorized PMD Helin Zhang
` (17 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
In order to unify the packet type, the field of 'packet_type' in
'struct rte_mbuf' needs to be extended from 16 to 32 bits.
Accordingly, some fields in 'struct rte_mbuf' are re-organized to
support this change for the vector PMD. As 'struct rte_kni_mbuf' for
KNI must map exactly onto 'struct rte_mbuf', it is modified
accordingly. In addition, the vector PMD of ixgbe is disabled by
default, as the layout of 'struct rte_mbuf' changed.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
---
config/common_linuxapp | 2 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 ++++++
lib/librte_mbuf/rte_mbuf.h | 23 ++++++++++++++++++++++
3 files changed, 30 insertions(+), 1 deletion(-)
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Disabled vector ixgbe PMD by default, as mbuf layout changed.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
* Integrated with changes of QinQ stripping/insertion.
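To make the constraint in the commit log concrete, the toy structures below are
hypothetical, heavily simplified stand-ins for 'struct rte_mbuf' and
'struct rte_kni_mbuf', showing the kind of compile-time offset check that
catches a userspace/kernel layout mismatch; this is only a sketch, not part of
the patch.
#include <stddef.h>
#include <stdint.h>
/* The kernel addresses these fields purely by offset in shared memory,
 * so both sides must agree on the layout. */
struct toy_mbuf {
	uint64_t ol_flags;
	uint32_t packet_type; /* widened from 16 to 32 bits by this patch */
	uint32_t pkt_len;
	uint16_t data_len;
	uint16_t vlan_tci;
};
struct toy_kni_mbuf {
	uint64_t ol_flags;
	char pad2[4];         /* packet_type is opaque padding on the KNI side */
	uint32_t pkt_len;
	uint16_t data_len;
};
/* C11 static asserts; older compilers can use the negative-size-array trick. */
_Static_assert(offsetof(struct toy_mbuf, pkt_len) ==
	       offsetof(struct toy_kni_mbuf, pkt_len), "pkt_len offset mismatch");
_Static_assert(offsetof(struct toy_mbuf, data_len) ==
	       offsetof(struct toy_kni_mbuf, data_len), "data_len offset mismatch");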
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 5deb55a..617d4a1 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -167,7 +167,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=y
+CONFIG_RTE_IXGBE_INC_VECTOR=n
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
#
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index 1e55c2d..e9f38bd 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -117,9 +117,15 @@ struct rte_kni_mbuf {
uint16_t data_off; /**< Start address of data in segment buffer. */
char pad1[4];
uint64_t ol_flags; /**< Offload features. */
+#ifdef RTE_NEXT_ABI
+ char pad2[4];
+ uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
+#else
char pad2[2];
uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+#endif
/* fields on second cache line */
char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index a0f3d3b..aa55769 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -275,6 +275,28 @@ struct rte_mbuf {
/* remaining bytes are set on RX when pulling packet from descriptor */
MARKER rx_descriptor_fields1;
+#ifdef RTE_NEXT_ABI
+ /*
+ * The packet type, which is the combination of outer/inner L2, L3, L4
+ * and tunnel types.
+ */
+ union {
+ uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
+ struct {
+ uint32_t l2_type:4; /**< (Outer) L2 type. */
+ uint32_t l3_type:4; /**< (Outer) L3 type. */
+ uint32_t l4_type:4; /**< (Outer) L4 type. */
+ uint32_t tun_type:4; /**< Tunnel type. */
+ uint32_t inner_l2_type:4; /**< Inner L2 type. */
+ uint32_t inner_l3_type:4; /**< Inner L3 type. */
+ uint32_t inner_l4_type:4; /**< Inner L4 type. */
+ };
+ };
+
+ uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
+ uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
+#else
/**
* The packet type, which is used to indicate ordinary packet and also
* tunneled packet format, i.e. each number is represented a type of
@@ -285,6 +307,7 @@ struct rte_mbuf {
uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
+#endif
uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU order) */
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 02/18] ixgbe: support unified packet type in vectorized PMD
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 03/18] mbuf: add definitions of unified packet types Helin Zhang
` (16 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
To unify the packet type, bit masks of packet type for ol_flags are
replaced. In addition, more packet types (UDP, TCP and SCTP) are
supported in vectorized ixgbe PMD.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Note that around a 2% performance drop (64B packets) was observed when doing 4
ports (1 port per 82599 card) IO forwarding on the same SNB core.
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
config/common_linuxapp | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec.c | 75 +++++++++++++++++++++++++++++++++++++-
2 files changed, 74 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v3 changes:
* Put vector ixgbe changes right after mbuf changes.
* Enabled vector ixgbe PMD by default together with changes for updated
vector PMD.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
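As a reading aid for the vector code below, here is a rough scalar equivalent
of what the RTE_NEXT_ABI branch of desc_to_olflags_v() ends up computing per
packet; the constants are simplified assumptions for illustration (the real
ones come from the ixgbe and mbuf headers), and the SIMD version handles all
four packets with packed 16-bit operations.
#include <stdint.h>
/* Simplified constants, illustration only. */
#define VTAG_SHIFT      3          /* the descriptor's VLAN-present status bit */
#define PKT_RX_VLAN_PKT (1ULL << 0)
/* Only the VLAN flag is derived from the descriptor status here; the packet
 * type now travels through the separate 32-bit packet_type field, filled by
 * the shuffle in _recv_raw_pkts_vec(), instead of through ol_flags. */
static void
desc_to_olflags_scalar(const uint32_t staterr[4], uint64_t ol_flags[4])
{
	for (int i = 0; i < 4; i++)
		ol_flags[i] = (staterr[i] >> VTAG_SHIFT) & PKT_RX_VLAN_PKT;
}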
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 617d4a1..5deb55a 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -167,7 +167,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=n
+CONFIG_RTE_IXGBE_INC_VECTOR=y
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
#
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
index abd10f6..ccea7cd 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
@@ -134,6 +134,12 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE
+#ifdef RTE_NEXT_ABI
+#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
+ ((uint64_t)PKT_RX_VLAN_PKT))
+#else
#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
PKT_RX_IPV6_HDR_EXT))
@@ -142,11 +148,26 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
((uint64_t)OLFLAGS_MASK << 16) | \
((uint64_t)OLFLAGS_MASK))
#define PTYPE_SHIFT (1)
+#endif /* RTE_NEXT_ABI */
+
#define VTAG_SHIFT (3)
static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
+#ifdef RTE_NEXT_ABI
+ __m128i vtag0, vtag1;
+ union {
+ uint16_t e[4];
+ uint64_t dword;
+ } vol;
+
+ vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
+ vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);
+ vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
+ vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);
+ vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;
+#else
__m128i ptype0, ptype1, vtag0, vtag1;
union {
uint16_t e[4];
@@ -166,6 +187,7 @@ desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
ptype1 = _mm_or_si128(ptype1, vtag1);
vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
+#endif /* RTE_NEXT_ABI */
rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
@@ -196,6 +218,18 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
int pos;
uint64_t var;
__m128i shuf_msk;
+#ifdef RTE_NEXT_ABI
+ __m128i crc_adjust = _mm_set_epi16(
+ 0, 0, 0, /* ignore non-length fields */
+ -rxq->crc_len, /* sub crc on data_len */
+ 0, /* ignore high-16bits of pkt_len */
+ -rxq->crc_len, /* sub crc on pkt_len */
+ 0, 0 /* ignore pkt_type field */
+ );
+ __m128i dd_check, eop_check;
+ __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
+ 0xFFFFFFFF, 0xFFFF07F0);
+#else
__m128i crc_adjust = _mm_set_epi16(
0, 0, 0, 0, /* ignore non-length fields */
0, /* ignore high-16bits of pkt_len */
@@ -204,6 +238,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
0 /* ignore pkt_type field */
);
__m128i dd_check, eop_check;
+#endif /* RTE_NEXT_ABI */
if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -232,6 +267,18 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL);
/* mask to shuffle from desc. to mbuf */
+#ifdef RTE_NEXT_ABI
+ shuf_msk = _mm_set_epi8(
+ 7, 6, 5, 4, /* octet 4~7, 32bits rss */
+ 15, 14, /* octet 14~15, low 16 bits vlan_macip */
+ 13, 12, /* octet 12~13, 16 bits data_len */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
+ 13, 12, /* octet 12~13, low 16 bits pkt_len */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_type */
+ 1, /* octet 1, 8 bits pkt_type field */
+ 0 /* octet 0, 4 bits offset 4 pkt_type field */
+ );
+#else
shuf_msk = _mm_set_epi8(
7, 6, 5, 4, /* octet 4~7, 32bits rss */
0xFF, 0xFF, /* skip high 16 bits vlan_macip, zero out */
@@ -241,18 +288,28 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
13, 12, /* octet 12~13, 16 bits data_len */
0xFF, 0xFF /* skip pkt_type field */
);
+#endif /* RTE_NEXT_ABI */
/* Cache is empty -> need to scan the buffer rings, but first move
* the next 'n' mbufs into the cache */
sw_ring = &rxq->sw_ring[rxq->rx_tail];
- /*
- * A. load 4 packet in one loop
+#ifdef RTE_NEXT_ABI
+ /* A. load 4 packet in one loop
+ * [A*. mask out 4 unused dirty field in desc]
* B. copy 4 mbuf point from swring to rx_pkts
* C. calc the number of DD bits among the 4 packets
* [C*. extract the end-of-packet bit, if requested]
* D. fill info. from desc to mbuf
*/
+#else
+ /* A. load 4 packet in one loop
+ * B. copy 4 mbuf point from swring to rx_pkts
+ * C. calc the number of DD bits among the 4 packets
+ * [C*. extract the end-of-packet bit, if requested]
+ * D. fill info. from desc to mbuf
+ */
+#endif /* RTE_NEXT_ABI */
for (pos = 0, nb_pkts_recd = 0; pos < RTE_IXGBE_VPMD_RX_BURST;
pos += RTE_IXGBE_DESCS_PER_LOOP,
rxdp += RTE_IXGBE_DESCS_PER_LOOP) {
@@ -289,6 +346,16 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.2 copy 2 mbuf point into rx_pkts */
_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);
+#ifdef RTE_NEXT_ABI
+ /* A* mask out 0~3 bits RSS type */
+ descs[3] = _mm_and_si128(descs[3], desc_mask);
+ descs[2] = _mm_and_si128(descs[2], desc_mask);
+
+ /* A* mask out 0~3 bits RSS type */
+ descs[1] = _mm_and_si128(descs[1], desc_mask);
+ descs[0] = _mm_and_si128(descs[0], desc_mask);
+#endif /* RTE_NEXT_ABI */
+
/* avoid compiler reorder optimization */
rte_compiler_barrier();
@@ -301,7 +368,11 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
+#ifdef RTE_NEXT_ABI
+ /* set ol_flags with vlan packet type */
+#else
/* set ol_flags with packet type and vlan tag */
+#endif /* RTE_NEXT_ABI */
desc_to_olflags_v(descs, &rx_pkts[pos]);
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 03/18] mbuf: add definitions of unified packet types
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 02/18] ixgbe: support unified packet type in vectorized PMD Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 04/18] e1000: replace bit mask based packet type with unified packet type Helin Zhang
` (15 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
There are only 6 bit flags in ol_flags for indicating packet
types, which is not enough to describe all the possible packet
types hardware can recognize. For example, i40e hardware can
recognize more than 150 packet types. The unified packet type is
composed of L2 type, L3 type, L4 type, tunnel type, inner L2 type,
inner L3 type and inner L4 type fields, and is stored in the 32-bit
'packet_type' field of 'struct rte_mbuf'.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 487 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 487 insertions(+)
v3 changes:
* Put the definitions of unified packet type into a single patch.
v4 changes:
* Added detailed description of each packet types.
v5 changes:
* Re-worded the commit logs.
* Added more detailed description for all packet types, together with examples.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
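For reference, a small usage sketch of the definitions added below, assuming
the series is applied and RTE_NEXT_ABI is enabled; the classify() helper
itself is illustrative, not part of the patch.
#include <stdio.h>
#include <rte_mbuf.h>
/* Classify a received mbuf using the masks and macros in the diff below. */
static void
classify(const struct rte_mbuf *m)
{
	uint32_t ptype = m->packet_type;
	if (RTE_ETH_IS_TUNNEL_PKT(ptype))
		printf("tunneled packet\n");
	if (RTE_ETH_IS_IPV4_HDR(ptype)) {
		switch (ptype & RTE_PTYPE_L4_MASK) {
		case RTE_PTYPE_L4_TCP:
			printf("IPv4/TCP\n");
			break;
		case RTE_PTYPE_L4_UDP:
			printf("IPv4/UDP\n");
			break;
		case RTE_PTYPE_L4_FRAG:
			printf("IPv4 fragment\n");
			break;
		default:
			printf("IPv4, other L4\n");
			break;
		}
	}
}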
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index aa55769..5e7cc26 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -201,6 +201,493 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
+#ifdef RTE_NEXT_ABI
+/*
+ * 32 bits are divided into several fields to mark packet types. Note that
+ * each field is indexical.
+ * - Bit 3:0 is for L2 types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
+ * - Bit 15:12 is for tunnel types.
+ * - Bit 19:16 is for inner L2 types.
+ * - Bit 23:20 is for inner L3 types.
+ * - Bit 27:24 is for inner L4 types.
+ * - Bit 31:28 is reserved.
+ *
+ * To be compatible with Vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that L3 types values are selected for checking IPV4/IPV6 header from
+ * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
+ *
+ * Note that the packet types of the same packet recognized by different
+ * hardware may be different, as different hardware may have different
+ * capability of packet type recognition.
+ *
+ * examples:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=0x29
+ * | 'version'=6, 'next header'=0x3A
+ * | 'ICMPv6 header'>
+ * will be recognized on i40e hardware as packet type combination of,
+ * RTE_PTYPE_L2_MAC |
+ * RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ * RTE_PTYPE_TUNNEL_IP |
+ * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_INNER_L4_ICMP.
+ *
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x2F
+ * | 'GRE header'
+ * | 'version'=6, 'next header'=0x11
+ * | 'UDP header'>
+ * will be recognized on i40e hardware as packet type combination of,
+ * RTE_PTYPE_L2_MAC |
+ * RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_TUNNEL_GRENAT |
+ * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_INNER_L4_UDP.
+ */
+#define RTE_PTYPE_UNKNOWN 0x00000000
+/**
+ * MAC (Media Access Control) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=[0x0800|0x86DD|others]>
+ */
+#define RTE_PTYPE_L2_MAC 0x00000001
+/**
+ * MAC (Media Access Control) packet type for time sync.
+ *
+ * Packet format:
+ * <'ether type'=0x88F7>
+ */
+#define RTE_PTYPE_L2_MAC_TIMESYNC 0x00000002
+/**
+ * ARP (Address Resolution Protocol) packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0806>
+ */
+#define RTE_PTYPE_L2_ARP 0x00000003
+/**
+ * LLDP (Link Layer Discovery Protocol) packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x88CC>
+ */
+#define RTE_PTYPE_L2_LLDP 0x00000004
+/**
+ * Mask of layer 2 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L2_MASK 0x0000000f
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * header option.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=5>
+ */
+#define RTE_PTYPE_L3_IPV4 0x00000010
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and contains header
+ * options.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[6-15], 'options'>
+ */
+#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * extension header.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x3B>
+ */
+#define RTE_PTYPE_L3_IPV6 0x00000040
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * header options.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[5-15], <'options'>>
+ */
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and contains extension
+ * headers.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * 'extension headers'>
+ */
+#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * extension headers.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * <'extension headers'>>
+ */
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
+/**
+ * Mask of layer 3 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L3_MASK 0x000000f0
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=6, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=6>
+ */
+#define RTE_PTYPE_L4_TCP 0x00000100
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17>
+ */
+#define RTE_PTYPE_L4_UDP 0x00000200
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * It refers to those packets of any IP types, which can be recognized as
+ * fragmented. A fragmented packet cannot be recognized as any other L4 types
+ * (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP,
+ * RTE_PTYPE_L4_NONFRAG).
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'MF'=1>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=44>
+ */
+#define RTE_PTYPE_L4_FRAG 0x00000300
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=132, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=132>
+ */
+#define RTE_PTYPE_L4_SCTP 0x00000400
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=1, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=1>
+ */
+#define RTE_PTYPE_L4_ICMP 0x00000500
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * It refers to those packets of any IP types, which cannot be recognized as
+ * any of above L4 types (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP,
+ * RTE_PTYPE_L4_FRAG, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP).
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'!=[6|17|44|132|1]>
+ */
+#define RTE_PTYPE_L4_NONFRAG 0x00000600
+/**
+ * Mask of layer 4 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_MASK 0x00000f00
+/**
+ * IP (Internet Protocol) in IP (Internet Protocol) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=[4|41]>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[4|41]>
+ */
+#define RTE_PTYPE_TUNNEL_IP 0x00001000
+/**
+ * GRE (Generic Routing Encapsulation) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=47>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=47>
+ */
+#define RTE_PTYPE_TUNNEL_GRE 0x00002000
+/**
+ * VXLAN (Virtual eXtensible Local Area Network) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17
+ * | 'destination port'=4789>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17
+ * | 'destination port'=4789>
+ */
+#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
+/**
+ * NVGRE (Network Virtualization using Generic Routing Encapsulation) tunneling
+ * packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=47
+ * | 'protocol type'=0x6558>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=47
+ * | 'protocol type'=0x6558'>
+ */
+#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
+/**
+ * GENEVE (Generic Network Virtualization Encapsulation) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17
+ * | 'destination port'=6081>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17
+ * | 'destination port'=6081>
+ */
+#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
+/**
+ * Tunneling packet type of Teredo, VXLAN (Virtual eXtensible Local Area
+ * Network) or GRE (Generic Routing Encapsulation) could be recognized as this
+ * packet type, if they cannot be recognized independently due to limited
+ * hardware capability.
+ */
+#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
+/**
+ * Mask of tunneling packet types.
+ */
+#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
+/**
+ * MAC (Media Access Control) packet type.
+ * It is used for inner packet type only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=[0x800|0x86DD]>
+ */
+#define RTE_PTYPE_INNER_L2_MAC 0x00010000
+/**
+ * MAC (Media Access Control) packet type with VLAN (Virtual Local Area
+ * Network) tag.
+ *
+ * Packet format (inner only):
+ * <'ether type'=[0x800|0x86DD], vlan=[1-4095]>
+ */
+#define RTE_PTYPE_INNER_L2_MAC_VLAN 0x00020000
+/**
+ * Mask of inner layer 2 packet types.
+ */
+#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and does not contain any header option.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=5>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and contains header options.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[6-15], 'options'>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and does not contain any extension header.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x3B>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and may or may not contain header options.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[5-15], <'options'>>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and contains extension headers.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * 'extension headers'>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and may or may not contain extension
+ * headers.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * <'extension headers'>>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
+/**
+ * Mask of inner layer 3 packet types.
+ */
+#define RTE_PTYPE_INNER_INNER_L3_MASK 0x00f00000
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=6, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=6>
+ */
+#define RTE_PTYPE_INNER_L4_TCP 0x01000000
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17>
+ */
+#define RTE_PTYPE_INNER_L4_UDP 0x02000000
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not have a layer 4 packet.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'MF'=1>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=44>
+ */
+#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=132, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=132>
+ */
+#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=1, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=1>
+ */
+#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not have other unknown layer
+ * 4 packet types.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'!=[6|17|44|132|1]>
+ */
+#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
+/**
+ * Mask of inner layer 4 packet types.
+ */
+#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
+ * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
+ * determine if it is an IPv4 packet.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
+ * one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
+ * determine if it is an IPv6 packet.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneling packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+#endif /* RTE_NEXT_ABI */
+
/**
* Get the name of a RX offload flag
*
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 04/18] e1000: replace bit mask based packet type with unified packet type
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (2 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 03/18] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 05/18] ixgbe: " Helin Zhang
` (14 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/e1000/igb_rxtx.c | 102 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 102 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
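The diff below follows a simple pattern that recurs in the ixgbe and i40e
patches as well: the hardware writes a small packet-type index into the RX
descriptor, and the driver maps it to a unified RTE_PTYPE_* value through a
static lookup table. A stripped-down sketch of that pattern, with hypothetical
names, shift/mask values and table contents:
#include <stdint.h>
#define PKT_TYPE_SHIFT 4
#define PKT_TYPE_MASK  0x7F
#define PKT_TYPE_MAX   0x80
static const uint32_t toy_ptype_table[PKT_TYPE_MAX] = {
	[0x01] = 0x00000011, /* e.g. MAC | IPv4 */
	[0x11] = 0x00000111, /* e.g. MAC | IPv4 | TCP */
	/* entries left at 0 mean RTE_PTYPE_UNKNOWN */
};
static inline uint32_t
toy_pkt_info_to_ptype(uint16_t pkt_info)
{
	return toy_ptype_table[(pkt_info >> PKT_TYPE_SHIFT) & PKT_TYPE_MASK];
}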
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 43d6703..d1c2ef8 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -590,6 +590,99 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
+#ifdef RTE_NEXT_ABI
+#define IGB_PACKET_TYPE_IPV4 0X01
+#define IGB_PACKET_TYPE_IPV4_TCP 0X11
+#define IGB_PACKET_TYPE_IPV4_UDP 0X21
+#define IGB_PACKET_TYPE_IPV4_SCTP 0X41
+#define IGB_PACKET_TYPE_IPV4_EXT 0X03
+#define IGB_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IGB_PACKET_TYPE_IPV6 0X04
+#define IGB_PACKET_TYPE_IPV6_TCP 0X14
+#define IGB_PACKET_TYPE_IPV6_UDP 0X24
+#define IGB_PACKET_TYPE_IPV6_EXT 0X0C
+#define IGB_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IGB_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IGB_PACKET_TYPE_IPV4_IPV6 0X05
+#define IGB_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IGB_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IGB_PACKET_TYPE_MAX 0X80
+#define IGB_PACKET_TYPE_MASK 0X7F
+#define IGB_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IGB_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IGB_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [IGB_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IGB_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IGB_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & E1000_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IGB_PACKET_TYPE_SHIFT) & IGB_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
+{
+ uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH;
+
+#if defined(RTE_LIBRTE_IEEE1588)
+ static uint32_t ip_pkt_etqf_map[8] = {
+ 0, 0, 0, PKT_RX_IEEE1588_PTP,
+ 0, 0, 0, 0,
+ };
+
+ pkt_flags |= ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07];
+#endif
+
+ return pkt_flags;
+}
+#else /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
@@ -617,6 +710,7 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
#endif
return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
}
+#endif /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
@@ -790,6 +884,10 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
+ lo_dword.hs_rss.pkt_info);
+#endif
/*
* Store the mbuf address into the next entry of the array
@@ -1024,6 +1122,10 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
+#ifdef RTE_NEXT_ABI
+ first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
+ lower.lo_dword.hs_rss.pkt_info);
+#endif
/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 05/18] ixgbe: replace bit mask based packet type with unified packet type
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (3 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 04/18] e1000: replace bit mask based packet type with unified packet type Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 06/18] i40e: " Helin Zhang
` (13 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Note that around a 2.5% performance drop (64B packets) was observed when doing
4 ports (1 port per 82599 card) IO forwarding on the same SNB core.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 163 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 163 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 041c544..7b5792b 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -855,6 +855,110 @@ end_of_tx:
* RX functions
*
**********************************************************************/
+#ifdef RTE_NEXT_ABI
+#define IXGBE_PACKET_TYPE_IPV4 0X01
+#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
+#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
+#define IXGBE_PACKET_TYPE_IPV4_SCTP 0X41
+#define IXGBE_PACKET_TYPE_IPV4_EXT 0X03
+#define IXGBE_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IXGBE_PACKET_TYPE_IPV6 0X04
+#define IXGBE_PACKET_TYPE_IPV6_TCP 0X14
+#define IXGBE_PACKET_TYPE_IPV6_UDP 0X24
+#define IXGBE_PACKET_TYPE_IPV6_EXT 0X0C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IXGBE_PACKET_TYPE_IPV4_IPV6 0X05
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IXGBE_PACKET_TYPE_MAX 0X80
+#define IXGBE_PACKET_TYPE_MASK 0X7F
+#define IXGBE_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+ixgbe_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IXGBE_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IXGBE_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4,
+ [IXGBE_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IXGBE_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IXGBE_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IXGBE_PACKET_TYPE_SHIFT) &
+ IXGBE_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
+{
+ static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
+ 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+ 0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
+ PKT_RX_RSS_HASH, 0, 0, 0,
+ 0, 0, 0, PKT_RX_FDIR,
+ };
+#ifdef RTE_LIBRTE_IEEE1588
+ static uint64_t ip_pkt_etqf_map[8] = {
+ 0, 0, 0, PKT_RX_IEEE1588_PTP,
+ 0, 0, 0, 0,
+ };
+
+ if (likely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return ip_pkt_etqf_map[(pkt_info >> 4) & 0X07] |
+ ip_rss_types_map[pkt_info & 0XF];
+ else
+ return ip_rss_types_map[pkt_info & 0XF];
+#else
+ return ip_rss_types_map[pkt_info & 0XF];
+#endif
+}
+#else /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
@@ -890,6 +994,7 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
#endif
return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
}
+#endif /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
@@ -945,7 +1050,13 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
+#ifdef RTE_NEXT_ABI
+ int nb_dd;
+ uint32_t s[LOOK_AHEAD];
+ uint16_t pkt_info[LOOK_AHEAD];
+#else
int s[LOOK_AHEAD], nb_dd;
+#endif /* RTE_NEXT_ABI */
int i, j, nb_rx = 0;
@@ -968,6 +1079,12 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rxdp[j].wb.upper.status_error;
+#ifdef RTE_NEXT_ABI
+ for (j = LOOK_AHEAD-1; j >= 0; --j)
+ pkt_info[j] = rxdp[j].wb.lower.lo_dword.
+ hs_rss.pkt_info;
+#endif /* RTE_NEXT_ABI */
+
/* Compute how many status bits were set */
nb_dd = 0;
for (j = 0; j < LOOK_AHEAD; ++j)
@@ -984,12 +1101,22 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
+#ifdef RTE_NEXT_ABI
+ pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
+ pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags |=
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
+ mb->ol_flags = pkt_flags;
+ mb->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);
+#else /* RTE_NEXT_ABI */
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
rxdp[j].wb.lower.lo_dword.data);
/* reuse status field from scan list */
pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
mb->ol_flags = pkt_flags;
+#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
@@ -1206,7 +1333,11 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
+#ifdef RTE_NEXT_ABI
+ uint32_t pkt_info;
+#else
uint32_t hlen_type_rss;
+#endif
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1324,6 +1455,19 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;
+#ifdef RTE_NEXT_ABI
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
+ /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
+ rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
+
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+ rxm->ol_flags = pkt_flags;
+ rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
+#else /* RTE_NEXT_ABI */
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
@@ -1332,6 +1476,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
@@ -1405,6 +1550,23 @@ ixgbe_fill_cluster_head_buf(
uint8_t port_id,
uint32_t staterr)
{
+#ifdef RTE_NEXT_ABI
+ uint16_t pkt_info;
+ uint64_t pkt_flags;
+
+ head->port = port_id;
+
+ /* The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
+ * set in the pkt_flags field.
+ */
+ head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
+ pkt_info = rte_le_to_cpu_32(desc->wb.lower.lo_dword.hs_rss.pkt_info);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags |= ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+ head->ol_flags = pkt_flags;
+ head->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
+#else /* RTE_NEXT_ABI */
uint32_t hlen_type_rss;
uint64_t pkt_flags;
@@ -1420,6 +1582,7 @@ ixgbe_fill_cluster_head_buf(
pkt_flags |= rx_desc_status_to_pkt_flags(staterr);
pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
head->ol_flags = pkt_flags;
+#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
head->hash.rss = rte_le_to_cpu_32(desc->wb.lower.hi_dword.rss);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 06/18] i40e: replace bit mask based packet type with unified packet type
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (4 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 05/18] ixgbe: " Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 07/18] enic: " Helin Zhang
` (12 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/i40e/i40e_rxtx.c | 528 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 528 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index b2e1d6d..b951da0 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -176,6 +176,514 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
return flags;
}
+#ifdef RTE_NEXT_ABI
+/* For the meaning of each value, the hardware datasheet gives more details */
+static inline uint32_t
+i40e_rxd_pkt_type_mapping(uint8_t ptype)
+{
+ static const uint32_t ptype_table[UINT8_MAX] __rte_cache_aligned = {
+ /* L2 types */
+ /* [0] reserved */
+ [1] = RTE_PTYPE_L2_MAC,
+ [2] = RTE_PTYPE_L2_MAC_TIMESYNC,
+ /* [3] - [5] reserved */
+ [6] = RTE_PTYPE_L2_LLDP,
+ /* [7] - [10] reserved */
+ [11] = RTE_PTYPE_L2_ARP,
+ /* [12] - [21] reserved */
+
+ /* Non tunneled IPv4 */
+ [22] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [24] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [25] reserved */
+ [26] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv4 */
+ [29] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [30] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [31] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [32] reserved */
+ [33] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [34] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [35] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> IPv6 */
+ [36] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [37] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [38] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [39] reserved */
+ [40] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [41] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [42] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN */
+ [43] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+ [44] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [45] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [46] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [47] reserved */
+ [48] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [49] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [50] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+ [51] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [52] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [53] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [54] reserved */
+ [55] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [56] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [57] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+ [58] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [59] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [60] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [61] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [62] reserved */
+ [63] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [64] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [65] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [66] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [67] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [68] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [69] reserved */
+ [70] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [71] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [72] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [73] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [74] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [75] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [76] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [77] reserved */
+ [78] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [79] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [80] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [81] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [82] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [83] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [84] reserved */
+ [85] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [86] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [87] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* Non tunneled IPv6 */
+ [88] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [90] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [91] reserved */
+ [92] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv4 */
+ [95] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [96] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [97] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [98] reserved */
+ [99] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [100] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [101] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> IPv6 */
+ [102] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [103] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [104] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [105] reserved */
+ [106] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [107] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [108] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN */
+ [109] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+ [110] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [111] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [112] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [113] reserved */
+ [114] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [115] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [116] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+ [117] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [118] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [119] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [120] reserved */
+ [121] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [122] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [123] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+ [124] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [125] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [126] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [127] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [128] reserved */
+ [129] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [130] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [131] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [132] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [133] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [134] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [135] reserved */
+ [136] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [137] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [138] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [139] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [140] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [141] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [142] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [143] reserved */
+ [144] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [145] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [146] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [147] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [148] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [149] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [150] reserved */
+ [151] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [152] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [153] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* All others reserved */
+ };
+
+ return ptype_table[ptype];
+}
+#else /* RTE_NEXT_ABI */
/* Translate pkt types to pkt flags */
static inline uint64_t
i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
@@ -443,6 +951,7 @@ i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
return ip_ptype_map[ptype];
}
+#endif /* RTE_NEXT_ABI */
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_FD_ID 0x01
@@ -730,11 +1239,18 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
i40e_rxd_to_vlan_tci(mb, &rxdp[j]);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+#ifdef RTE_NEXT_ABI
+ mb->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT));
+#else
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
mb->packet_type = (uint16_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT);
+#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -971,9 +1487,15 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
i40e_rxd_to_vlan_tci(rxm, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
+#else
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT);
+#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1129,10 +1651,16 @@ i40e_recv_scattered_pkts(void *rx_queue,
i40e_rxd_to_vlan_tci(first_seg, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+#ifdef RTE_NEXT_ABI
+ first_seg->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
+#else
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
first_seg->packet_type = (uint16_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT);
+#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 07/18] enic: replace bit mask based packet type with unified packet type
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (5 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 06/18] i40e: " Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 08/18] vmxnet3: " Helin Zhang
` (11 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/enic/enic_main.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
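Every hunk in this patch applies the same substitution: the L3 classification
that used to be OR-ed into ol_flags is now written to mbuf->packet_type, while
the checksum status flags are left untouched. A condensed sketch of that
pattern (function and parameter names are illustrative, not taken from the
driver):
#include <rte_mbuf.h>
static void
set_l3_type(struct rte_mbuf *m, int ipv4, int ipv6, int ip_csum_bad)
{
#ifdef RTE_NEXT_ABI
	if (ipv4)
		m->packet_type = RTE_PTYPE_L3_IPV4;
	else if (ipv6)
		m->packet_type = RTE_PTYPE_L3_IPV6;
#else
	if (ipv4)
		m->ol_flags |= PKT_RX_IPV4_HDR;
	else if (ipv6)
		m->ol_flags |= PKT_RX_IPV6_HDR;
#endif
	/* checksum reporting is not changed by this series */
	if (ipv4 && ip_csum_bad)
		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;
}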
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 15313c2..f47e96c 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -423,7 +423,11 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -432,7 +436,11 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
} else {
/* Header split */
if (sop && !eop) {
@@ -445,7 +453,11 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -457,13 +469,22 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
+#ifdef RTE_NEXT_ABI
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV4;
+#else
hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -475,7 +496,12 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
+#ifdef RTE_NEXT_ABI
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV6;
+#else
hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
}
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 08/18] vmxnet3: replace bit mask based packet type with unified packet type
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (6 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 07/18] enic: " Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 09/18] fm10k: " Helin Zhang
` (10 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/vmxnet3/vmxnet3_rxtx.c | 8 ++++++++
1 file changed, 8 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
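The classification below hinges on the IPv4 IHL field: the low nibble of
version_ihl is the header length in 32-bit words, so a byte length greater
than sizeof(struct ipv4_hdr) (20 bytes) means options are present and the
packet is reported as IPV4_EXT instead of plain IPV4. A standalone sketch of
that check (the helper name is illustrative; struct ipv4_hdr is the definition
from rte_ip.h at the time of this series):
#include <rte_ip.h>
#include <rte_mbuf.h>
static void
set_ipv4_ptype(struct rte_mbuf *m, const struct ipv4_hdr *ip)
{
	/* IHL is counted in 32-bit words; more than 5 words means options */
	if (((ip->version_ihl & 0x0f) << 2) > (int)sizeof(struct ipv4_hdr))
		m->packet_type = RTE_PTYPE_L3_IPV4_EXT;
	else
		m->packet_type = RTE_PTYPE_L3_IPV4;
}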
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index a1eac45..25ae2f6 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -649,9 +649,17 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
+#else
rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+#endif
else
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type = RTE_PTYPE_L3_IPV4;
+#else
rxm->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!rcd->cnc) {
if (!rcd->ipc)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 09/18] fm10k: replace bit mask based packet type with unified packet type
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (7 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 08/18] vmxnet3: " Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 10/18] app/test-pipeline: " Helin Zhang
` (9 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/fm10k/fm10k_rxtx.c | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)
v4 changes:
* Supported unified packet type of fm10k from v4.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
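Unlike i40e's flat ptype, the fm10k descriptor reports L3 and L4 types in
separate bit groups, so the table below is indexed by the OR of the two
(e.g. FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_TCP). A self-contained sketch of
that indexing idea, using hypothetical HW_* constants in place of the
driver's FM10K_PKTTYPE_* bits:
#include <stdint.h>
#include <rte_mbuf.h>
/* Hypothetical stand-ins for the driver's L3/L4 packet-type bit groups */
#define HW_L3_IPV4 0x1
#define HW_L3_IPV6 0x2
#define HW_L4_TCP  0x4
#define HW_L4_UDP  0x8
static inline uint32_t
composite_index_to_ptype(uint8_t hw_bits)
{
	static const uint32_t tbl[16] = {
		[HW_L3_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
		[HW_L3_IPV4 | HW_L4_TCP] = RTE_PTYPE_L2_MAC |
			RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
		[HW_L3_IPV6 | HW_L4_UDP] = RTE_PTYPE_L2_MAC |
			RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
	};
	return tbl[hw_bits & 0x0f];
}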
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 56df6cd..45005c2 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -68,12 +68,37 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
static inline void
rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
{
+#ifdef RTE_NEXT_ABI
+ static const uint32_t
+ ptype_table[FM10K_RXD_PKTTYPE_MASK >> FM10K_RXD_PKTTYPE_SHIFT]
+ __rte_cache_aligned = {
+ [FM10K_PKTTYPE_OTHER] = RTE_PTYPE_L2_MAC,
+ [FM10K_PKTTYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [FM10K_PKTTYPE_IPV4_EX] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [FM10K_PKTTYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [FM10K_PKTTYPE_IPV6_EX] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ };
+
+ m->packet_type = ptype_table[(d->w.pkt_info & FM10K_RXD_PKTTYPE_MASK)
+ >> FM10K_RXD_PKTTYPE_SHIFT];
+#else /* RTE_NEXT_ABI */
uint16_t ptype;
static const uint16_t pt_lut[] = { 0,
PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT,
PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT,
0, 0, 0
};
+#endif /* RTE_NEXT_ABI */
if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK)
m->ol_flags |= PKT_RX_RSS_HASH;
@@ -97,9 +122,11 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
if (unlikely(d->d.staterr & FM10K_RXD_STATUS_RXE))
m->ol_flags |= PKT_RX_RECIP_ERR;
+#ifndef RTE_NEXT_ABI
ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >>
FM10K_RXD_PKTTYPE_SHIFT;
m->ol_flags |= pt_lut[(uint8_t)ptype];
+#endif
}
uint16_t
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 10/18] app/test-pipeline: replace bit mask based packet type with unified packet type
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (8 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 09/18] fm10k: " Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 11/18] app/testpmd: " Helin Zhang
` (8 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pipeline/pipeline_hash.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
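For applications the change is a direct substitution: instead of testing
PKT_RX_IPV4_HDR/PKT_RX_IPV6_HDR in ol_flags, the code branches on
mbuf->packet_type through RTE_ETH_IS_IPV4_HDR()/RTE_ETH_IS_IPV6_HDR(), and
gains an explicit skip for packets that are neither. A condensed sketch of the
new control flow (the include below is an assumption about where the helpers
end up; they are introduced by the mbuf patch of this series):
#include <rte_ethdev.h>
static int
l3_family(const struct rte_mbuf *m)
{
	if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
		return 4;	/* build an IPv4 hash key */
	if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
		return 6;	/* build an IPv6 hash key */
	return 0;		/* neither: skip, like the new "else continue" */
}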
diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 4598ad4..aa3f9e5 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,20 +459,33 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;
k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+#else
} else {
+#endif
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;
memcpy(key, ipv6_dst, 16);
+#ifdef RTE_NEXT_ABI
+ } else
+ continue;
+#else
}
+#endif
*signature = test_hash(key, 0, 0);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 11/18] app/testpmd: replace bit mask based packet type with unified packet type
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (9 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 10/18] app/test-pipeline: " Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 12/18] app/test: Remove useless code Helin Zhang
` (7 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
app/test-pmd/csumonly.c | 14 ++++
app/test-pmd/rxonly.c | 183 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 197 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v4 changes:
* Added printing of the packet type of each received packet in rxonly mode.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
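Most of the rxonly changes below decode a unified packet_type for display by
masking out one sub-field at a time; the same masks are what any application
would use to extract, say, only the tunnel type. A compressed sketch of that
decoding (the printfs stand in for the full switch statements in the patch):
#include <stdio.h>
#include <rte_mbuf.h>
static void
show_ptype_fields(uint32_t ptype)
{
	/* Each sub-field lives in its own bit range; mask it out to test it. */
	if ((ptype & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4)
		printf("outer L3: IPv4\n");
	if ((ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)
		printf("outer L4: UDP\n");
	if ((ptype & RTE_PTYPE_TUNNEL_MASK) == RTE_PTYPE_TUNNEL_GRENAT)
		printf("tunnel: GRE/Teredo/VXLAN\n");
	if ((ptype & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_TCP)
		printf("inner L4: TCP\n");
}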
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 950ea82..fab9600 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -202,8 +202,14 @@ parse_ethernet(struct ether_hdr *eth_hdr, struct testpmd_offload_info *info)
/* Parse a vxlan header */
static void
+#ifdef RTE_NEXT_ABI
+parse_vxlan(struct udp_hdr *udp_hdr,
+ struct testpmd_offload_info *info,
+ uint32_t pkt_type)
+#else
parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
uint64_t mbuf_olflags)
+#endif
{
struct ether_hdr *eth_hdr;
@@ -211,8 +217,12 @@ parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
* (rfc7348) or that the rx offload flag is set (i40e only
* currently) */
if (udp_hdr->dst_port != _htons(4789) &&
+#ifdef RTE_NEXT_ABI
+ RTE_ETH_IS_TUNNEL_PKT(pkt_type) == 0)
+#else
(mbuf_olflags & (PKT_RX_TUNNEL_IPV4_HDR |
PKT_RX_TUNNEL_IPV6_HDR)) == 0)
+#endif
return;
info->is_tunnel = 1;
@@ -549,7 +559,11 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
struct udp_hdr *udp_hdr;
udp_hdr = (struct udp_hdr *)((char *)l3_hdr +
info.l3_len);
+#ifdef RTE_NEXT_ABI
+ parse_vxlan(udp_hdr, &info, m->packet_type);
+#else
parse_vxlan(udp_hdr, &info, m->ol_flags);
+#endif
} else if (info.l4_proto == IPPROTO_GRE) {
struct simple_gre_hdr *gre_hdr;
gre_hdr = (struct simple_gre_hdr *)
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index f6a2f84..5a30347 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -91,7 +91,11 @@ pkt_burst_receive(struct fwd_stream *fs)
uint64_t ol_flags;
uint16_t nb_rx;
uint16_t i, packet_type;
+#ifdef RTE_NEXT_ABI
+ uint16_t is_encapsulation;
+#else
uint64_t is_encapsulation;
+#endif
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
@@ -135,8 +139,12 @@ pkt_burst_receive(struct fwd_stream *fs)
ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
+#ifdef RTE_NEXT_ABI
+ is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
+#else
is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
PKT_RX_TUNNEL_IPV6_HDR);
+#endif
print_ether_addr(" src=", ð_hdr->s_addr);
print_ether_addr(" - dst=", ð_hdr->d_addr);
@@ -163,6 +171,177 @@ pkt_burst_receive(struct fwd_stream *fs)
if (ol_flags & PKT_RX_QINQ_PKT)
printf(" - QinQ VLAN tci=0x%x, VLAN tci outer=0x%x",
mb->vlan_tci, mb->vlan_tci_outer);
+#ifdef RTE_NEXT_ABI
+ if (mb->packet_type) {
+ uint32_t ptype;
+
+ /* (outer) L2 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L2_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L2_MAC:
+ printf(" - (outer) L2 type: MAC");
+ break;
+ case RTE_PTYPE_L2_MAC_TIMESYNC:
+ printf(" - (outer) L2 type: MAC Timesync");
+ break;
+ case RTE_PTYPE_L2_ARP:
+ printf(" - (outer) L2 type: ARP");
+ break;
+ case RTE_PTYPE_L2_LLDP:
+ printf(" - (outer) L2 type: LLDP");
+ break;
+ default:
+ printf(" - (outer) L2 type: Unknown");
+ break;
+ }
+
+ /* (outer) L3 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L3_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L3_IPV4:
+ printf(" - (outer) L3 type: IPV4");
+ break;
+ case RTE_PTYPE_L3_IPV4_EXT:
+ printf(" - (outer) L3 type: IPV4_EXT");
+ break;
+ case RTE_PTYPE_L3_IPV6:
+ printf(" - (outer) L3 type: IPV6");
+ break;
+ case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
+ printf(" - (outer) L3 type: IPV4_EXT_UNKNOWN");
+ break;
+ case RTE_PTYPE_L3_IPV6_EXT:
+ printf(" - (outer) L3 type: IPV6_EXT");
+ break;
+ case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN:
+ printf(" - (outer) L3 type: IPV6_EXT_UNKNOWN");
+ break;
+ default:
+ printf(" - (outer) L3 type: Unknown");
+ break;
+ }
+
+ /* (outer) L4 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L4_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L4_TCP:
+ printf(" - (outer) L4 type: TCP");
+ break;
+ case RTE_PTYPE_L4_UDP:
+ printf(" - (outer) L4 type: UDP");
+ break;
+ case RTE_PTYPE_L4_FRAG:
+ printf(" - (outer) L4 type: L4_FRAG");
+ break;
+ case RTE_PTYPE_L4_SCTP:
+ printf(" - (outer) L4 type: SCTP");
+ break;
+ case RTE_PTYPE_L4_ICMP:
+ printf(" - (outer) L4 type: ICMP");
+ break;
+ case RTE_PTYPE_L4_NONFRAG:
+ printf(" - (outer) L4 type: L4_NONFRAG");
+ break;
+ default:
+ printf(" - (outer) L4 type: Unknown");
+ break;
+ }
+
+ /* packet tunnel type */
+ ptype = mb->packet_type & RTE_PTYPE_TUNNEL_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_TUNNEL_IP:
+ printf(" - Tunnel type: IP");
+ break;
+ case RTE_PTYPE_TUNNEL_GRE:
+ printf(" - Tunnel type: GRE");
+ break;
+ case RTE_PTYPE_TUNNEL_VXLAN:
+ printf(" - Tunnel type: VXLAN");
+ break;
+ case RTE_PTYPE_TUNNEL_NVGRE:
+ printf(" - Tunnel type: NVGRE");
+ break;
+ case RTE_PTYPE_TUNNEL_GENEVE:
+ printf(" - Tunnel type: GENEVE");
+ break;
+ case RTE_PTYPE_TUNNEL_GRENAT:
+ printf(" - Tunnel type: GRENAT");
+ break;
+ default:
+ printf(" - Tunnel type: Unkown");
+ break;
+ }
+
+ /* inner L2 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_L2_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L2_MAC:
+ printf(" - Inner L2 type: MAC");
+ break;
+ case RTE_PTYPE_INNER_L2_MAC_VLAN:
+ printf(" - Inner L2 type: MAC_VLAN");
+ break;
+ default:
+ printf(" - Inner L2 type: Unknown");
+ break;
+ }
+
+ /* inner L3 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_INNER_L3_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L3_IPV4:
+ printf(" - Inner L3 type: IPV4");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV4_EXT:
+ printf(" - Inner L3 type: IPV4_EXT");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6:
+ printf(" - Inner L3 type: IPV6");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN:
+ printf(" - Inner L3 type: IPV4_EXT_UNKNOWN");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6_EXT:
+ printf(" - Inner L3 type: IPV6_EXT");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN:
+ printf(" - Inner L3 type: IPV6_EXT_UNKOWN");
+ break;
+ default:
+ printf(" - Inner L3 type: Unkown");
+ break;
+ }
+
+ /* inner L4 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_L4_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L4_TCP:
+ printf(" - Inner L4 type: TCP");
+ break;
+ case RTE_PTYPE_INNER_L4_UDP:
+ printf(" - Inner L4 type: UDP");
+ break;
+ case RTE_PTYPE_INNER_L4_FRAG:
+ printf(" - Inner L4 type: L4_FRAG");
+ break;
+ case RTE_PTYPE_INNER_L4_SCTP:
+ printf(" - Inner L4 type: SCTP");
+ break;
+ case RTE_PTYPE_INNER_L4_ICMP:
+ printf(" - Inner L4 type: ICMP");
+ break;
+ case RTE_PTYPE_INNER_L4_NONFRAG:
+ printf(" - Inner L4 type: L4_NONFRAG");
+ break;
+ default:
+ printf(" - Inner L4 type: Unknown");
+ break;
+ }
+ printf("\n");
+ } else
+ printf("Unknown packet type\n");
+#endif /* RTE_NEXT_ABI */
if (is_encapsulation) {
struct ipv4_hdr *ipv4_hdr;
struct ipv6_hdr *ipv6_hdr;
@@ -176,7 +355,11 @@ pkt_burst_receive(struct fwd_stream *fs)
l2_len = sizeof(struct ether_hdr);
/* Do not support ipv4 option field */
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
+#else
if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
+#endif
l3_len = sizeof(struct ipv4_hdr);
ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 12/18] app/test: Remove useless code
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (10 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 11/18] app/testpmd: " Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
` (6 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
Several useless lines of code were added accidentally, which blocks packet
type unification. They should be removed.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test/packet_burst_generator.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
v4 changes:
* Removed several useless code lines which block packet type unification.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index b46eed7..61e6340 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -272,19 +272,21 @@ nomore_mbuf:
if (ipv4) {
pkt->vlan_tci = ETHER_TYPE_IPv4;
pkt->l3_len = sizeof(struct ipv4_hdr);
-
+#ifndef RTE_NEXT_ABI
if (vlan_enabled)
pkt->ol_flags = PKT_RX_IPV4_HDR | PKT_RX_VLAN_PKT;
else
pkt->ol_flags = PKT_RX_IPV4_HDR;
+#endif
} else {
pkt->vlan_tci = ETHER_TYPE_IPv6;
pkt->l3_len = sizeof(struct ipv6_hdr);
-
+#ifndef RTE_NEXT_ABI
if (vlan_enabled)
pkt->ol_flags = PKT_RX_IPV6_HDR | PKT_RX_VLAN_PKT;
else
pkt->ol_flags = PKT_RX_IPV6_HDR;
+#endif
}
pkts_burst[nb_pkt] = pkt;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (11 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 12/18] app/test: Remove useless code Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 14/18] examples/ip_reassembly: " Helin Zhang
` (5 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_fragmentation/main.c | 9 +++++++++
1 file changed, 9 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 0922ba6..b71d05f 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -283,7 +283,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
len = qconf->tx_mbufs[port_out].len;
/* if this is an IPv4 packet */
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
/* Read the lookup key (i.e. ip_dst) from the input packet */
@@ -317,9 +321,14 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
if (unlikely (len2 < 0))
return;
}
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if this is an IPv6 packet */
+#else
}
/* if this is an IPv6 packet */
else if (m->ol_flags & PKT_RX_IPV6_HDR) {
+#endif
struct ipv6_hdr *ip_hdr;
ipv6 = 1;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 14/18] examples/ip_reassembly: replace bit mask based packet type with unified packet type
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (12 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 15/18] examples/l3fwd-acl: " Helin Zhang
` (4 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_reassembly/main.c | 9 +++++++++
1 file changed, 9 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 9ecb6f9..f1c47ad 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -356,7 +356,11 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dst_port = portid;
/* if packet is IPv4 */
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & (PKT_RX_IPV4_HDR)) {
+#endif
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
@@ -396,9 +400,14 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
}
eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if packet is IPv6 */
+#else
}
/* if packet is IPv6 */
else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
+#endif
struct ipv6_extension_fragment *frag_hdr;
struct ipv6_hdr *ip_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 15/18] examples/l3fwd-acl: replace bit mask based packet type with unified packet type
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (13 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 14/18] examples/ip_reassembly: " Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 16/18] examples/l3fwd-power: " Helin Zhang
` (3 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-acl/main.c | 29 +++++++++++++++++++++++------
1 file changed, 23 insertions(+), 6 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a5d4f25..78b6df2 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -645,10 +645,13 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
+#else
int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
if (type == PKT_RX_IPV4_HDR) {
-
+#endif
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt,
unsigned char *) + sizeof(struct ether_hdr));
@@ -667,9 +670,11 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
/* Not a valid IPv4 packet */
rte_pktmbuf_free(pkt);
}
-
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
+#else
} else if (type == PKT_RX_IPV6_HDR) {
-
+#endif
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -687,17 +692,22 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
+#else
int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
if (type == PKT_RX_IPV4_HDR) {
-
+#endif
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;
-
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
+#else
} else if (type == PKT_RX_IPV6_HDR) {
-
+#endif
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -745,10 +755,17 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
+ dump_acl4_rule(m, res);
+ else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
+ dump_acl6_rule(m, res);
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR)
dump_acl4_rule(m, res);
else
dump_acl6_rule(m, res);
+#endif /* RTE_NEXT_ABI */
}
#endif
rte_pktmbuf_free(m);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 16/18] examples/l3fwd-power: replace bit mask based packet type with unified packet type
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (14 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 15/18] examples/l3fwd-acl: " Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 17/18] examples/l3fwd: " Helin Zhang
` (2 subsequent siblings)
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-power/main.c | 8 ++++++++
1 file changed, 8 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 6057059..705188f 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -635,7 +635,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
/* Handle IPv4 headers.*/
ipv4_hdr =
(struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char*)
@@ -670,8 +674,12 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
ether_addr_copy(&ports_eth_addr[dst_port], ð_hdr->s_addr);
send_single_packet(m, dst_port);
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+#else
}
else {
+#endif
/* Handle IPv6 headers.*/
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
struct ipv6_hdr *ipv6_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 17/18] examples/l3fwd: replace bit mask based packet type with unified packet type
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (15 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 16/18] examples/l3fwd-power: " Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 18/18] mbuf: remove old packet type bit masks Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd/main.c | 123 ++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 120 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v3 changes:
* Minor bug fixes and enhancements.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
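The key hunk below is processx4_step1(): for the 4-packet vector path it ANDs
the packet_type of all four mbufs together, so ipv4_flag keeps the
RTE_PTYPE_L3_IPV4 bit only when every packet in the group is IPv4, which is
exactly what processx4_step2() tests before issuing a single
rte_lpm_lookupx4(). A scalar sketch of that accumulation (GROUP and the helper
name are illustrative; the example itself uses FWDSTEP):
#include <rte_mbuf.h>
#define GROUP 4	/* mirrors FWDSTEP in the example */
/* Non-zero only if all packets in the group carry the IPv4 L3 type. */
static inline uint32_t
group_is_all_ipv4(struct rte_mbuf *pkt[GROUP])
{
	uint32_t flag = pkt[0]->packet_type & RTE_PTYPE_L3_IPV4;
	int i;
	for (i = 1; i < GROUP; i++)
		flag &= pkt[i]->packet_type;
	return flag;
}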
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 7e4bbfd..eff9580 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -948,7 +948,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
/* Handle IPv4 headers.*/
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char *) +
sizeof(struct ether_hdr));
@@ -979,8 +983,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], ð_hdr->s_addr);
send_single_packet(m, dst_port);
-
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+#else
} else {
+#endif
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;
@@ -999,8 +1006,13 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], ð_hdr->s_addr);
send_single_packet(m, dst_port);
+#ifdef RTE_NEXT_ABI
+ } else
+ /* Free the mbuf that contains non-IPV4/IPV6 packet */
+ rte_pktmbuf_free(m);
+#else
}
-
+#endif
}
#ifdef DO_RFC_1812_CHECKS
@@ -1024,12 +1036,19 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
+#ifdef RTE_NEXT_ABI
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
+#else
rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
+#endif
{
uint8_t ihl;
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(ptype)) {
+#else
if ((flags & PKT_RX_IPV4_HDR) != 0) {
-
+#endif
ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
ipv4_hdr->time_to_live--;
@@ -1059,11 +1078,19 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
+#else
if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
+#else
} else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
+#endif
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1097,12 +1124,52 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];
dst_port[0] = dp;
+#ifdef RTE_NEXT_ABI
+ rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
+#else
rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
+#endif
te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
}
+#ifdef RTE_NEXT_ABI
+/*
+ * Read packet_type and destination IPV4 addresses from 4 mbufs.
+ */
+static inline void
+processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
+ __m128i *dip,
+ uint32_t *ipv4_flag)
+{
+ struct ipv4_hdr *ipv4_hdr;
+ struct ether_hdr *eth_hdr;
+ uint32_t x0, x1, x2, x3;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x0 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] = pkt[0]->packet_type & RTE_PTYPE_L3_IPV4;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x1 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] &= pkt[1]->packet_type;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x2 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] &= pkt[2]->packet_type;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x3 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] &= pkt[3]->packet_type;
+
+ dip[0] = _mm_set_epi32(x3, x2, x1, x0);
+}
+#else /* RTE_NEXT_ABI */
/*
* Read ol_flags and destination IPV4 addresses from 4 mbufs.
*/
@@ -1135,14 +1202,24 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
+#endif /* RTE_NEXT_ABI */
/*
* Lookup into LPM for destination port.
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
+#ifdef RTE_NEXT_ABI
+processx4_step2(const struct lcore_conf *qconf,
+ __m128i dip,
+ uint32_t ipv4_flag,
+ uint8_t portid,
+ struct rte_mbuf *pkt[FWDSTEP],
+ uint16_t dprt[FWDSTEP])
+#else
processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
+#endif /* RTE_NEXT_ABI */
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1152,7 +1229,11 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);
/* if all 4 packets are IPV4. */
+#ifdef RTE_NEXT_ABI
+ if (likely(ipv4_flag)) {
+#else
if (likely(flag != 0)) {
+#endif
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1202,6 +1283,16 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[2], te[2]);
_mm_store_si128(p[3], te[3]);
+#ifdef RTE_NEXT_ABI
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
+ &dst_port[0], pkt[0]->packet_type);
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
+ &dst_port[1], pkt[1]->packet_type);
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
+ &dst_port[2], pkt[2]->packet_type);
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
+ &dst_port[3], pkt[3]->packet_type);
+#else /* RTE_NEXT_ABI */
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
&dst_port[0], pkt[0]->ol_flags);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
@@ -1210,6 +1301,7 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
&dst_port[2], pkt[2]->ol_flags);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
&dst_port[3], pkt[3]->ol_flags);
+#endif /* RTE_NEXT_ABI */
}
/*
@@ -1396,7 +1488,11 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
+#ifdef RTE_NEXT_ABI
+ uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
+#else
uint32_t flag[MAX_PKT_BURST / FWDSTEP];
+#endif
uint16_t pnum[MAX_PKT_BURST + 1];
#endif
@@ -1466,6 +1562,18 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
for (j = 0; j < n ; j+=4) {
+#ifdef RTE_NEXT_ABI
+ uint32_t pkt_type =
+ pkts_burst[j]->packet_type &
+ pkts_burst[j+1]->packet_type &
+ pkts_burst[j+2]->packet_type &
+ pkts_burst[j+3]->packet_type;
+ if (pkt_type & RTE_PTYPE_L3_IPV4) {
+ simple_ipv4_fwd_4pkts(
+ &pkts_burst[j], portid, qconf);
+ } else if (pkt_type &
+ RTE_PTYPE_L3_IPV6) {
+#else /* RTE_NEXT_ABI */
uint32_t ol_flag = pkts_burst[j]->ol_flags
& pkts_burst[j+1]->ol_flags
& pkts_burst[j+2]->ol_flags
@@ -1474,6 +1582,7 @@ main_loop(__attribute__((unused)) void *dummy)
simple_ipv4_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else if (ol_flag & PKT_RX_IPV6_HDR) {
+#endif /* RTE_NEXT_ABI */
simple_ipv6_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1498,13 +1607,21 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
+#ifdef RTE_NEXT_ABI
+ &ipv4_flag[j / FWDSTEP]);
+#else
&flag[j / FWDSTEP]);
+#endif
}
k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
+#ifdef RTE_NEXT_ABI
+ ipv4_flag[j / FWDSTEP], portid,
+#else
flag[j / FWDSTEP], portid,
+#endif
&pkts_burst[j], &dst_port[j]);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v7 18/18] mbuf: remove old packet type bit masks
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (16 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 17/18] examples/l3fwd: " Helin Zhang
@ 2015-06-19 8:14 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
18 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-19 8:14 UTC (permalink / raw)
To: dev
As unified packet types are used instead, those old bit masks and
the relevant macros for packet type indication need to be removed.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 4 ++++
lib/librte_mbuf/rte_mbuf.h | 4 ++++
2 files changed, 8 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
* Redefined the bit masks for packet RX offload flags.
v5 changes:
* Rolled back the bit masks of RX flags, for ABI compatibility.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index f506517..4320dd4 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -251,14 +251,18 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
+#ifndef RTE_NEXT_ABI
case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
+#endif /* RTE_NEXT_ABI */
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
+#ifndef RTE_NEXT_ABI
case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
+#endif /* RTE_NEXT_ABI */
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 5e7cc26..9f32edf 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -91,14 +91,18 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
+#ifndef RTE_NEXT_ABI
#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
+#endif /* RTE_NEXT_ABI */
#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
+#ifndef RTE_NEXT_ABI
#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
+#endif /* RTE_NEXT_ABI */
#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double VLAN stripped. */
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 00/18] unified packet type
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
` (17 preceding siblings ...)
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 18/18] mbuf: remove old packet type bit masks Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
` (19 more replies)
18 siblings, 20 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
Currently only 6 bits stored in ol_flags are used to indicate the packet
types. This is not enough, as some NIC hardware can recognize quite a lot
of packet types, e.g. i40e hardware can recognize more than 150 packet
types. Hiding those packet types hides hardware offload capabilities which
could be quite useful for improving performance and for end users.
So a unified packet type is needed to support all possible PMDs. The 16-bit
packet_type field in the mbuf structure can be enlarged to 32 bits and used
for this purpose. In addition, all packet types stored in the ol_flags field
should be removed entirely, and the 6 bits of ol_flags can be saved as the
benefit.
Initially, the 32 bits of packet_type are divided into several sub-fields to
indicate different packet type information of a packet. The initial design
divides those bits into fields for L2 types, L3 types, L4 types, tunnel
types, inner L2 types, inner L3 types and inner L4 types. All PMDs should
translate the offloaded packet types into these 7 fields of information for
user applications.
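For illustration only (not part of the cover letter), a sketch of how an
application could pull those 7 fields back out of a unified packet_type,
assuming the RTE_PTYPE_*_MASK constants defined in patch 03/18 of this
series (the inner L3 mask is named RTE_PTYPE_INNER_INNER_L3_MASK there):

#include <stdio.h>
#include <stdint.h>
#include <rte_mbuf.h>

/* Sketch: print each sub-field of a unified packet type. */
void
dump_ptype_fields(uint32_t ptype)
{
	printf("l2=%#x l3=%#x l4=%#x tunnel=%#x "
	       "inner_l2=%#x inner_l3=%#x inner_l4=%#x\n",
	       ptype & RTE_PTYPE_L2_MASK,
	       ptype & RTE_PTYPE_L3_MASK,
	       ptype & RTE_PTYPE_L4_MASK,
	       ptype & RTE_PTYPE_TUNNEL_MASK,
	       ptype & RTE_PTYPE_INNER_L2_MASK,
	       ptype & RTE_PTYPE_INNER_INNER_L3_MASK,
	       ptype & RTE_PTYPE_INNER_L4_MASK);
}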
To avoid breaking ABI compatibility, currently all the code changes for
unified packet type are disabled at compile time by default. Users can enable
them manually by defining the RTE_NEXT_ABI macro. The code changes will be
enabled by default in a future release, and the old version will be deleted
accordingly, after the ABI change process is done.
Note that this patch set should be integrated after another patch set for
'[PATCH v3 0/7] support i40e QinQ stripping and insertion', to cleanly
resolve the conflicts during integration, as both patch sets modify
'struct rte_mbuf' and the final layout of 'struct rte_mbuf' is key to the
vectorized ixgbe PMD.
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
* Used redefined packet types and enlarged packet_type field for all PMDs
and corresponding applications.
* Removed changes in bond and its relevant application, as there is no need
at all according to the recent bond changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Put vector ixgbe changes right after mbuf changes.
* Disabled vector ixgbe PMD by default, as mbuf layout changed, and then
re-enabled it after vector ixgbe PMD updated.
* Put the definitions of unified packet type into a single patch.
* Minor bug fixes and enhancements in l3fwd example.
v4 changes:
* Added detailed description of each packet type.
* Supported unified packet type of fm10k.
* Added printing logs of packet types of each received packet for rxonly
mode in testpmd.
* Removed several useless code lines which block packet type unification from
app/test/packet_burst_generator.c.
v5 changes:
* Added more detailed description for each packet type, together with examples.
* Rolled back the macro definitions of RX packet flags, for ABI compatibility.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
* Integrated with patch set for '[PATCH v3 0/7] support i40e QinQ stripping
and insertion', to clearly solve the conflicts during merging.
v8 changes:
* Moved the field of 'vlan_tci_outer' in 'struct rte_mbuf' to the end of the 1st
cache line, to avoid breaking any vectorized PMD storing, as fields of
'packet_type, pkt_len, data_len, vlan_tci, rss' should be in a contiguous 128
bits.
Helin Zhang (18):
mbuf: redefine packet_type in rte_mbuf
ixgbe: support unified packet type in vectorized PMD
mbuf: add definitions of unified packet types
e1000: replace bit mask based packet type with unified packet type
ixgbe: replace bit mask based packet type with unified packet type
i40e: replace bit mask based packet type with unified packet type
enic: replace bit mask based packet type with unified packet type
vmxnet3: replace bit mask based packet type with unified packet type
fm10k: replace bit mask based packet type with unified packet type
app/test-pipeline: replace bit mask based packet type with unified
packet type
app/testpmd: replace bit mask based packet type with unified packet
type
app/test: Remove useless code
examples/ip_fragmentation: replace bit mask based packet type with
unified packet type
examples/ip_reassembly: replace bit mask based packet type with
unified packet type
examples/l3fwd-acl: replace bit mask based packet type with unified
packet type
examples/l3fwd-power: replace bit mask based packet type with unified
packet type
examples/l3fwd: replace bit mask based packet type with unified packet
type
mbuf: remove old packet type bit masks
app/test-pipeline/pipeline_hash.c | 13 +
app/test-pmd/csumonly.c | 14 +
app/test-pmd/rxonly.c | 183 +++++++
app/test/packet_burst_generator.c | 6 +-
drivers/net/e1000/igb_rxtx.c | 102 ++++
drivers/net/enic/enic_main.c | 26 +
drivers/net/fm10k/fm10k_rxtx.c | 27 ++
drivers/net/i40e/i40e_rxtx.c | 528 +++++++++++++++++++++
drivers/net/ixgbe/ixgbe_rxtx.c | 163 +++++++
drivers/net/ixgbe/ixgbe_rxtx_vec.c | 75 ++-
drivers/net/vmxnet3/vmxnet3_rxtx.c | 8 +
examples/ip_fragmentation/main.c | 9 +
examples/ip_reassembly/main.c | 9 +
examples/l3fwd-acl/main.c | 29 +-
examples/l3fwd-power/main.c | 8 +
examples/l3fwd/main.c | 123 ++++-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 +
lib/librte_mbuf/rte_mbuf.c | 4 +
lib/librte_mbuf/rte_mbuf.h | 517 ++++++++++++++++++++
19 files changed, 1837 insertions(+), 13 deletions(-)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-07-02 9:03 ` Thomas Monjalon
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 02/18] ixgbe: support unified packet type in vectorized PMD Helin Zhang
` (18 subsequent siblings)
19 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
In order to unify the packet type, the field of 'packet_type' in
'struct rte_mbuf' needs to be extended from 16 to 32 bits.
Accordingly, some fields in 'struct rte_mbuf' are re-organized to
support this change for the vector PMD. As 'struct rte_kni_mbuf' for
KNI must map exactly onto 'struct rte_mbuf', it is modified
accordingly. In addition, the vector PMD of ixgbe is disabled by
default, as the layout of 'struct rte_mbuf' has changed.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
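For illustration only (not part of the patch), a sketch of how the two
views of the new union can be used, assuming an RTE_NEXT_ABI build and the
RTE_PTYPE_* constants added later in this series (patch 03/18):

#include <stdint.h>
#include <rte_mbuf.h>

/* Sketch: the whole-word and bit-field views read the same 32 bits. */
static inline int
outer_l3_is_ipv4(const struct rte_mbuf *m)
{
	/* Whole-word view of packet_type. */
	int by_word = (m->packet_type & RTE_PTYPE_L3_IPV4) != 0;

	/* Bit-field view: l3_type carries bits 7:4 of packet_type. */
	int by_field = (m->l3_type & (RTE_PTYPE_L3_IPV4 >> 4)) != 0;

	return by_word && by_field;	/* both views agree */
}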
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
---
config/common_linuxapp | 2 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 +++++
lib/librte_mbuf/rte_mbuf.h | 26 ++++++++++++++++++++++
3 files changed, 33 insertions(+), 1 deletion(-)
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Disabled vector ixgbe PMD by default, as mbuf layout changed.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
* Integrated with changes of QinQ stripping/insertion.
v8 changes:
* Moved the field of 'vlan_tci_outer' in 'struct rte_mbuf' to the end
of the 1st cache line, to avoid breaking any vectorized PMD storing.
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 5deb55a..617d4a1 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -167,7 +167,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=y
+CONFIG_RTE_IXGBE_INC_VECTOR=n
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
#
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index 1e55c2d..e9f38bd 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -117,9 +117,15 @@ struct rte_kni_mbuf {
uint16_t data_off; /**< Start address of data in segment buffer. */
char pad1[4];
uint64_t ol_flags; /**< Offload features. */
+#ifdef RTE_NEXT_ABI
+ char pad2[4];
+ uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
+#else
char pad2[2];
uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+#endif
/* fields on second cache line */
char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index a0f3d3b..0315561 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -275,6 +275,28 @@ struct rte_mbuf {
/* remaining bytes are set on RX when pulling packet from descriptor */
MARKER rx_descriptor_fields1;
+#ifdef RTE_NEXT_ABI
+ /*
+ * The packet type, which is the combination of outer/inner L2, L3, L4
+ * and tunnel types.
+ */
+ union {
+ uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
+ struct {
+ uint32_t l2_type:4; /**< (Outer) L2 type. */
+ uint32_t l3_type:4; /**< (Outer) L3 type. */
+ uint32_t l4_type:4; /**< (Outer) L4 type. */
+ uint32_t tun_type:4; /**< Tunnel type. */
+ uint32_t inner_l2_type:4; /**< Inner L2 type. */
+ uint32_t inner_l3_type:4; /**< Inner L3 type. */
+ uint32_t inner_l4_type:4; /**< Inner L4 type. */
+ };
+ };
+
+ uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
+ uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
+#else /* RTE_NEXT_ABI */
/**
* The packet type, which is used to indicate ordinary packet and also
* tunneled packet format, i.e. each number is represented a type of
@@ -286,6 +308,7 @@ struct rte_mbuf {
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU order) */
+#endif /* RTE_NEXT_ABI */
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
@@ -306,6 +329,9 @@ struct rte_mbuf {
} hash; /**< hash information */
uint32_t seqn; /**< Sequence number. See also rte_reorder_insert() */
+#ifdef RTE_NEXT_ABI
+ uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU order) */
+#endif /* RTE_NEXT_ABI */
/* second cache line - fields only used in slow path or on TX */
MARKER cacheline1 __rte_cache_aligned;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 02/18] ixgbe: support unified packet type in vectorized PMD
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 03/18] mbuf: add definitions of unified packet types Helin Zhang
` (17 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
To unify the packet type, bit masks of packet type for ol_flags are
replaced. In addition, more packet types (UDP, TCP and SCTP) are
supported in vectorized ixgbe PMD.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Note that around a 2% performance drop (64B packets) was observed when
doing 4-port (1 port per 82599 card) IO forwarding on the same SNB core.
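A scalar illustration only (not from the patch) of what the new "A*"
masking step below achieves, using the 0xFFFF07F0 desc_mask constant from
the hunk: within the two descriptor octets that the shuffle later copies
into mbuf->packet_type, bits that are not packet-type bits (including the
4-bit RSS type in bits 3:0) are cleared first.

#include <stdint.h>

/* Sketch: mirror of desc_mask's low 32-bit lane applied to one dword. */
static inline uint32_t
masked_ptype_bits(uint32_t desc_lo_dword)
{
	return desc_lo_dword & 0xFFFF07F0u;
}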
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
config/common_linuxapp | 2 +-
drivers/net/ixgbe/ixgbe_rxtx_vec.c | 75 +++++++++++++++++++++++++++++++++++++-
2 files changed, 74 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v3 changes:
* Put vector ixgbe changes right after mbuf changes.
* Enabled vector ixgbe PMD by default together with changes for updated
vector PMD.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 617d4a1..5deb55a 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -167,7 +167,7 @@ CONFIG_RTE_LIBRTE_IXGBE_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_IXGBE_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=y
-CONFIG_RTE_IXGBE_INC_VECTOR=n
+CONFIG_RTE_IXGBE_INC_VECTOR=y
CONFIG_RTE_IXGBE_RX_OLFLAGS_ENABLE=y
#
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
index abd10f6..ccea7cd 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
@@ -134,6 +134,12 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE
+#ifdef RTE_NEXT_ABI
+#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
+ ((uint64_t)PKT_RX_VLAN_PKT))
+#else
#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
PKT_RX_IPV6_HDR_EXT))
@@ -142,11 +148,26 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
((uint64_t)OLFLAGS_MASK << 16) | \
((uint64_t)OLFLAGS_MASK))
#define PTYPE_SHIFT (1)
+#endif /* RTE_NEXT_ABI */
+
#define VTAG_SHIFT (3)
static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
+#ifdef RTE_NEXT_ABI
+ __m128i vtag0, vtag1;
+ union {
+ uint16_t e[4];
+ uint64_t dword;
+ } vol;
+
+ vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
+ vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);
+ vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
+ vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);
+ vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;
+#else
__m128i ptype0, ptype1, vtag0, vtag1;
union {
uint16_t e[4];
@@ -166,6 +187,7 @@ desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
ptype1 = _mm_or_si128(ptype1, vtag1);
vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
+#endif /* RTE_NEXT_ABI */
rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
@@ -196,6 +218,18 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
int pos;
uint64_t var;
__m128i shuf_msk;
+#ifdef RTE_NEXT_ABI
+ __m128i crc_adjust = _mm_set_epi16(
+ 0, 0, 0, /* ignore non-length fields */
+ -rxq->crc_len, /* sub crc on data_len */
+ 0, /* ignore high-16bits of pkt_len */
+ -rxq->crc_len, /* sub crc on pkt_len */
+ 0, 0 /* ignore pkt_type field */
+ );
+ __m128i dd_check, eop_check;
+ __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
+ 0xFFFFFFFF, 0xFFFF07F0);
+#else
__m128i crc_adjust = _mm_set_epi16(
0, 0, 0, 0, /* ignore non-length fields */
0, /* ignore high-16bits of pkt_len */
@@ -204,6 +238,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
0 /* ignore pkt_type field */
);
__m128i dd_check, eop_check;
+#endif /* RTE_NEXT_ABI */
if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -232,6 +267,18 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL);
/* mask to shuffle from desc. to mbuf */
+#ifdef RTE_NEXT_ABI
+ shuf_msk = _mm_set_epi8(
+ 7, 6, 5, 4, /* octet 4~7, 32bits rss */
+ 15, 14, /* octet 14~15, low 16 bits vlan_macip */
+ 13, 12, /* octet 12~13, 16 bits data_len */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
+ 13, 12, /* octet 12~13, low 16 bits pkt_len */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_type */
+ 1, /* octet 1, 8 bits pkt_type field */
+ 0 /* octet 0, 4 bits offset 4 pkt_type field */
+ );
+#else
shuf_msk = _mm_set_epi8(
7, 6, 5, 4, /* octet 4~7, 32bits rss */
0xFF, 0xFF, /* skip high 16 bits vlan_macip, zero out */
@@ -241,18 +288,28 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
13, 12, /* octet 12~13, 16 bits data_len */
0xFF, 0xFF /* skip pkt_type field */
);
+#endif /* RTE_NEXT_ABI */
/* Cache is empty -> need to scan the buffer rings, but first move
* the next 'n' mbufs into the cache */
sw_ring = &rxq->sw_ring[rxq->rx_tail];
- /*
- * A. load 4 packet in one loop
+#ifdef RTE_NEXT_ABI
+ /* A. load 4 packet in one loop
+ * [A*. mask out 4 unused dirty field in desc]
* B. copy 4 mbuf point from swring to rx_pkts
* C. calc the number of DD bits among the 4 packets
* [C*. extract the end-of-packet bit, if requested]
* D. fill info. from desc to mbuf
*/
+#else
+ /* A. load 4 packet in one loop
+ * B. copy 4 mbuf point from swring to rx_pkts
+ * C. calc the number of DD bits among the 4 packets
+ * [C*. extract the end-of-packet bit, if requested]
+ * D. fill info. from desc to mbuf
+ */
+#endif /* RTE_NEXT_ABI */
for (pos = 0, nb_pkts_recd = 0; pos < RTE_IXGBE_VPMD_RX_BURST;
pos += RTE_IXGBE_DESCS_PER_LOOP,
rxdp += RTE_IXGBE_DESCS_PER_LOOP) {
@@ -289,6 +346,16 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.2 copy 2 mbuf point into rx_pkts */
_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);
+#ifdef RTE_NEXT_ABI
+ /* A* mask out 0~3 bits RSS type */
+ descs[3] = _mm_and_si128(descs[3], desc_mask);
+ descs[2] = _mm_and_si128(descs[2], desc_mask);
+
+ /* A* mask out 0~3 bits RSS type */
+ descs[1] = _mm_and_si128(descs[1], desc_mask);
+ descs[0] = _mm_and_si128(descs[0], desc_mask);
+#endif /* RTE_NEXT_ABI */
+
/* avoid compiler reorder optimization */
rte_compiler_barrier();
@@ -301,7 +368,11 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
+#ifdef RTE_NEXT_ABI
+ /* set ol_flags with vlan packet type */
+#else
/* set ol_flags with packet type and vlan tag */
+#endif /* RTE_NEXT_ABI */
desc_to_olflags_v(descs, &rx_pkts[pos]);
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 03/18] mbuf: add definitions of unified packet types
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 02/18] ixgbe: support unified packet type in vectorized PMD Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-30 8:43 ` Olivier MATZ
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 04/18] e1000: replace bit mask based packet type with unified packet type Helin Zhang
` (16 subsequent siblings)
19 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
There are only 6 bit flags in ol_flags for indicating packet
types, which is not enough to describe all the possible packet
types hardware can recognize. For example, i40e hardware can
recognize more than 150 packet types. The unified packet type is
composed of L2 type, L3 type, L4 type, tunnel type, inner L2 type,
inner L3 type and inner L4 type fields, and is stored in the
32-bit 'packet_type' field of 'struct rte_mbuf'.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
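For illustration only (not part of the patch), a sketch that composes the
packet type of the first example given in the comment block below and
checks it with the helper macros defined in this patch:

#include <assert.h>
#include <stdint.h>
#include <rte_mbuf.h>

/* Sketch: IPv4 outer header, IP-in-IP tunnel, IPv6 inner header, ICMPv6. */
void
ptype_compose_example(void)
{
	uint32_t ptype = RTE_PTYPE_L2_MAC |
			 RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
			 RTE_PTYPE_TUNNEL_IP |
			 RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
			 RTE_PTYPE_INNER_L4_ICMP;

	assert(RTE_ETH_IS_IPV4_HDR(ptype));	/* outer L3 is IPv4       */
	assert(!RTE_ETH_IS_IPV6_HDR(ptype));	/* ...and not IPv6        */
	assert(RTE_ETH_IS_TUNNEL_PKT(ptype));	/* the packet is tunneled */
}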
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 487 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 487 insertions(+)
v3 changes:
* Put the definitions of unified packet type into a single patch.
v4 changes:
* Added detailed description of each packet type.
v5 changes:
* Re-worded the commit logs.
* Added more detailed description for all packet types, together with examples.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 0315561..0ee0c55 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -201,6 +201,493 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
+#ifdef RTE_NEXT_ABI
+/*
+ * 32 bits are divided into several fields to mark packet types. Note that
+ * each field is indexical.
+ * - Bit 3:0 is for L2 types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
+ * - Bit 15:12 is for tunnel types.
+ * - Bit 19:16 is for inner L2 types.
+ * - Bit 23:20 is for inner L3 types.
+ * - Bit 27:24 is for inner L4 types.
+ * - Bit 31:28 is reserved.
+ *
+ * To be compatible with Vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that L3 types values are selected for checking IPV4/IPV6 header from
+ * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
+ *
+ * Note that the packet types of the same packet recognized by different
+ * hardware may be different, as different hardware may have different
+ * capability of packet type recognition.
+ *
+ * examples:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=0x29
+ * | 'version'=6, 'next header'=0x3A
+ * | 'ICMPv6 header'>
+ * will be recognized on i40e hardware as packet type combination of,
+ * RTE_PTYPE_L2_MAC |
+ * RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ * RTE_PTYPE_TUNNEL_IP |
+ * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_INNER_L4_ICMP.
+ *
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x2F
+ * | 'GRE header'
+ * | 'version'=6, 'next header'=0x11
+ * | 'UDP header'>
+ * will be recognized on i40e hardware as packet type combination of,
+ * RTE_PTYPE_L2_MAC |
+ * RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_TUNNEL_GRENAT |
+ * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_INNER_L4_UDP.
+ */
+#define RTE_PTYPE_UNKNOWN 0x00000000
+/**
+ * MAC (Media Access Control) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=[0x0800|0x86DD|others]>
+ */
+#define RTE_PTYPE_L2_MAC 0x00000001
+/**
+ * MAC (Media Access Control) packet type for time sync.
+ *
+ * Packet format:
+ * <'ether type'=0x88F7>
+ */
+#define RTE_PTYPE_L2_MAC_TIMESYNC 0x00000002
+/**
+ * ARP (Address Resolution Protocol) packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0806>
+ */
+#define RTE_PTYPE_L2_ARP 0x00000003
+/**
+ * LLDP (Link Layer Discovery Protocol) packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x88CC>
+ */
+#define RTE_PTYPE_L2_LLDP 0x00000004
+/**
+ * Mask of layer 2 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L2_MASK 0x0000000f
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * header option.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=5>
+ */
+#define RTE_PTYPE_L3_IPV4 0x00000010
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and contains header
+ * options.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[6-15], 'options'>
+ */
+#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * extension header.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x3B>
+ */
+#define RTE_PTYPE_L3_IPV6 0x00000040
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * header options.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[5-15], <'options'>>
+ */
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and contains extension
+ * headers.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * 'extension headers'>
+ */
+#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * extension headers.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * <'extension headers'>>
+ */
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
+/**
+ * Mask of layer 3 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L3_MASK 0x000000f0
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=6, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=6>
+ */
+#define RTE_PTYPE_L4_TCP 0x00000100
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17>
+ */
+#define RTE_PTYPE_L4_UDP 0x00000200
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * It refers to those packets of any IP types, which can be recognized as
+ * fragmented. A fragmented packet cannot be recognized as any other L4 types
+ * (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP,
+ * RTE_PTYPE_L4_NONFRAG).
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'MF'=1>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=44>
+ */
+#define RTE_PTYPE_L4_FRAG 0x00000300
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=132, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=132>
+ */
+#define RTE_PTYPE_L4_SCTP 0x00000400
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=1, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=1>
+ */
+#define RTE_PTYPE_L4_ICMP 0x00000500
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * It refers to those packets of any IP types, which cannot be recognized as
+ * any of above L4 types (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP,
+ * RTE_PTYPE_L4_FRAG, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP).
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'!=[6|17|44|132|1]>
+ */
+#define RTE_PTYPE_L4_NONFRAG 0x00000600
+/**
+ * Mask of layer 4 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_MASK 0x00000f00
+/**
+ * IP (Internet Protocol) in IP (Internet Protocol) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=[4|41]>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[4|41]>
+ */
+#define RTE_PTYPE_TUNNEL_IP 0x00001000
+/**
+ * GRE (Generic Routing Encapsulation) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=47>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=47>
+ */
+#define RTE_PTYPE_TUNNEL_GRE 0x00002000
+/**
+ * VXLAN (Virtual eXtensible Local Area Network) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17
+ * | 'destination port'=4789>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17
+ * | 'destination port'=4789>
+ */
+#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
+/**
+ * NVGRE (Network Virtualization using Generic Routing Encapsulation) tunneling
+ * packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=47
+ * | 'protocol type'=0x6558>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=47
+ * | 'protocol type'=0x6558'>
+ */
+#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
+/**
+ * GENEVE (Generic Network Virtualization Encapsulation) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17
+ * | 'destination port'=6081>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17
+ * | 'destination port'=6081>
+ */
+#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
+/**
+ * Tunneling packet type of Teredo, VXLAN (Virtual eXtensible Local Area
+ * Network) or GRE (Generic Routing Encapsulation) could be recognized as this
+ * packet type, if they cannot be recognized independently due to limited
+ * hardware capability.
+ */
+#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
+/**
+ * Mask of tunneling packet types.
+ */
+#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
+/**
+ * MAC (Media Access Control) packet type.
+ * It is used for inner packet type only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=[0x800|0x86DD]>
+ */
+#define RTE_PTYPE_INNER_L2_MAC 0x00010000
+/**
+ * MAC (Media Access Control) packet type with VLAN (Virtual Local Area
+ * Network) tag.
+ *
+ * Packet format (inner only):
+ * <'ether type'=[0x800|0x86DD], vlan=[1-4095]>
+ */
+#define RTE_PTYPE_INNER_L2_MAC_VLAN 0x00020000
+/**
+ * Mask of inner layer 2 packet types.
+ */
+#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and does not contain any header option.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=5>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and contains header options.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[6-15], 'options'>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and does not contain any extension header.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x3B>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and may or may not contain header options.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[5-15], <'options'>>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and contains extension headers.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * 'extension headers'>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and may or may not contain extension
+ * headers.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * <'extension headers'>>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
+/**
+ * Mask of inner layer 3 packet types.
+ */
+#define RTE_PTYPE_INNER_INNER_L3_MASK 0x00f00000
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=6, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=6>
+ */
+#define RTE_PTYPE_INNER_L4_TCP 0x01000000
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17>
+ */
+#define RTE_PTYPE_INNER_L4_UDP 0x02000000
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not have a layer 4 packet.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'MF'=1>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=44>
+ */
+#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=132, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=132>
+ */
+#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=1, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=1>
+ */
+#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not have other unknown layer
+ * 4 packet types.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'!=[6|17|44|132|1]>
+ */
+#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
+/**
+ * Mask of inner layer 4 packet types.
+ */
+#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
+ * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
+ * determine if it is an IPv4 packet.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
+ * one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
+ * determine if it is an IPv6 packet.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneling packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+#endif /* RTE_NEXT_ABI */
+
/**
* Get the name of a RX offload flag
*
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 04/18] e1000: replace bit mask based packet type with unified packet type
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (2 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 03/18] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 05/18] ixgbe: " Helin Zhang
` (15 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/e1000/igb_rxtx.c | 102 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 102 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 43d6703..d1c2ef8 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -590,6 +590,99 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
+#ifdef RTE_NEXT_ABI
+#define IGB_PACKET_TYPE_IPV4 0X01
+#define IGB_PACKET_TYPE_IPV4_TCP 0X11
+#define IGB_PACKET_TYPE_IPV4_UDP 0X21
+#define IGB_PACKET_TYPE_IPV4_SCTP 0X41
+#define IGB_PACKET_TYPE_IPV4_EXT 0X03
+#define IGB_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IGB_PACKET_TYPE_IPV6 0X04
+#define IGB_PACKET_TYPE_IPV6_TCP 0X14
+#define IGB_PACKET_TYPE_IPV6_UDP 0X24
+#define IGB_PACKET_TYPE_IPV6_EXT 0X0C
+#define IGB_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IGB_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IGB_PACKET_TYPE_IPV4_IPV6 0X05
+#define IGB_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IGB_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IGB_PACKET_TYPE_MAX 0X80
+#define IGB_PACKET_TYPE_MASK 0X7F
+#define IGB_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IGB_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IGB_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [IGB_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IGB_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IGB_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & E1000_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IGB_PACKET_TYPE_SHIFT) & IGB_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
+{
+ uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH;
+
+#if defined(RTE_LIBRTE_IEEE1588)
+ static uint32_t ip_pkt_etqf_map[8] = {
+ 0, 0, 0, PKT_RX_IEEE1588_PTP,
+ 0, 0, 0, 0,
+ };
+
+ pkt_flags |= ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07];
+#endif
+
+ return pkt_flags;
+}
+#else /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
@@ -617,6 +710,7 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
#endif
return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
}
+#endif /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
@@ -790,6 +884,10 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
+ lo_dword.hs_rss.pkt_info);
+#endif
/*
* Store the mbuf address into the next entry of the array
@@ -1024,6 +1122,10 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
+#ifdef RTE_NEXT_ABI
+ first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
+ lower.lo_dword.hs_rss.pkt_info);
+#endif
/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 05/18] ixgbe: replace bit mask based packet type with unified packet type
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (3 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 04/18] e1000: replace bit mask based packet type with unified packet type Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 06/18] i40e: " Helin Zhang
` (14 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
To unify packet type among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Note that around a 2.5% performance drop (64B packets) was observed when
doing 4-port (1 port per 82599 card) IO forwarding on the same SNB core.
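For illustration only (not part of the patch), a sketch of the
flag-mapping half added below (ixgbe_rxd_pkt_info_to_pkt_flags() without
the IEEE1588 branch): the low 4 bits of the descriptor's pkt_info select
RSS-hash or flow-director flags from a small table, while the packet type
itself now goes into mbuf->packet_type rather than ol_flags; the table
values are copied from the hunk below.

#include <stdint.h>
#include <rte_mbuf.h>

/* Sketch: map the low 4 pkt_info bits to RX offload flags. */
static inline uint64_t
pkt_info_to_rx_flags_sketch(uint16_t pkt_info)
{
	static const uint64_t ip_rss_types_map[16] = {
		0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
		0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
		PKT_RX_RSS_HASH, 0, 0, 0,
		0, 0, 0, PKT_RX_FDIR,
	};

	return ip_rss_types_map[pkt_info & 0xF];
}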
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 163 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 163 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index a211096..83a869f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -860,6 +860,110 @@ end_of_tx:
* RX functions
*
**********************************************************************/
+#ifdef RTE_NEXT_ABI
+#define IXGBE_PACKET_TYPE_IPV4 0X01
+#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
+#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
+#define IXGBE_PACKET_TYPE_IPV4_SCTP 0X41
+#define IXGBE_PACKET_TYPE_IPV4_EXT 0X03
+#define IXGBE_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IXGBE_PACKET_TYPE_IPV6 0X04
+#define IXGBE_PACKET_TYPE_IPV6_TCP 0X14
+#define IXGBE_PACKET_TYPE_IPV6_UDP 0X24
+#define IXGBE_PACKET_TYPE_IPV6_EXT 0X0C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IXGBE_PACKET_TYPE_IPV4_IPV6 0X05
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IXGBE_PACKET_TYPE_MAX 0X80
+#define IXGBE_PACKET_TYPE_MASK 0X7F
+#define IXGBE_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+ixgbe_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IXGBE_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IXGBE_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4,
+ [IXGBE_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IXGBE_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IXGBE_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IXGBE_PACKET_TYPE_SHIFT) &
+ IXGBE_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
+{
+ static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
+ 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+ 0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
+ PKT_RX_RSS_HASH, 0, 0, 0,
+ 0, 0, 0, PKT_RX_FDIR,
+ };
+#ifdef RTE_LIBRTE_IEEE1588
+ static uint64_t ip_pkt_etqf_map[8] = {
+ 0, 0, 0, PKT_RX_IEEE1588_PTP,
+ 0, 0, 0, 0,
+ };
+
+ if (likely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return ip_pkt_etqf_map[(pkt_info >> 4) & 0X07] |
+ ip_rss_types_map[pkt_info & 0XF];
+ else
+ return ip_rss_types_map[pkt_info & 0XF];
+#else
+ return ip_rss_types_map[pkt_info & 0XF];
+#endif
+}
+#else /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
@@ -895,6 +999,7 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
#endif
return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
}
+#endif /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
@@ -950,7 +1055,13 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
+#ifdef RTE_NEXT_ABI
+ int nb_dd;
+ uint32_t s[LOOK_AHEAD];
+ uint16_t pkt_info[LOOK_AHEAD];
+#else
int s[LOOK_AHEAD], nb_dd;
+#endif /* RTE_NEXT_ABI */
int i, j, nb_rx = 0;
@@ -973,6 +1084,12 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rxdp[j].wb.upper.status_error;
+#ifdef RTE_NEXT_ABI
+ for (j = LOOK_AHEAD-1; j >= 0; --j)
+ pkt_info[j] = rxdp[j].wb.lower.lo_dword.
+ hs_rss.pkt_info;
+#endif /* RTE_NEXT_ABI */
+
/* Compute how many status bits were set */
nb_dd = 0;
for (j = 0; j < LOOK_AHEAD; ++j)
@@ -989,12 +1106,22 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
+#ifdef RTE_NEXT_ABI
+ pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
+ pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags |=
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
+ mb->ol_flags = pkt_flags;
+ mb->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);
+#else /* RTE_NEXT_ABI */
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
rxdp[j].wb.lower.lo_dword.data);
/* reuse status field from scan list */
pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
mb->ol_flags = pkt_flags;
+#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
@@ -1211,7 +1338,11 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
+#ifdef RTE_NEXT_ABI
+ uint32_t pkt_info;
+#else
uint32_t hlen_type_rss;
+#endif
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1329,6 +1460,19 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;
+#ifdef RTE_NEXT_ABI
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
+ /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
+ rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
+
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+ rxm->ol_flags = pkt_flags;
+ rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
+#else /* RTE_NEXT_ABI */
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
@@ -1337,6 +1481,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
@@ -1410,6 +1555,23 @@ ixgbe_fill_cluster_head_buf(
uint8_t port_id,
uint32_t staterr)
{
+#ifdef RTE_NEXT_ABI
+ uint16_t pkt_info;
+ uint64_t pkt_flags;
+
+ head->port = port_id;
+
+ /* The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
+ * set in the pkt_flags field.
+ */
+ head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
+ pkt_info = rte_le_to_cpu_32(desc->wb.lower.lo_dword.hs_rss.pkt_info);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags |= ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+ head->ol_flags = pkt_flags;
+ head->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
+#else /* RTE_NEXT_ABI */
uint32_t hlen_type_rss;
uint64_t pkt_flags;
@@ -1425,6 +1587,7 @@ ixgbe_fill_cluster_head_buf(
pkt_flags |= rx_desc_status_to_pkt_flags(staterr);
pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
head->ol_flags = pkt_flags;
+#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
head->hash.rss = rte_le_to_cpu_32(desc->wb.lower.hi_dword.rss);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
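As a condensed illustration of the ixgbe decode shown above (not part of the patch): the descriptor's pkt_info field is shifted by IXGBE_PACKET_TYPE_SHIFT and masked with IXGBE_PACKET_TYPE_MASK, and the result indexes the cache-aligned ptype table, with ETQF-matched packets reported as unknown. A tiny worked example of just the index arithmetic, using a hypothetical pkt_info value:

#include <stdint.h>

/* Illustration only: 0x140 is a hypothetical pkt_info value. Shifting
 * by IXGBE_PACKET_TYPE_SHIFT (0x04) and masking with
 * IXGBE_PACKET_TYPE_MASK (0x7F) selects table entry 0x14, i.e.
 * IXGBE_PACKET_TYPE_IPV6_TCP, which the table above maps to
 * RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP. */
static inline uint16_t
ixgbe_ptype_index_example(void)
{
	uint16_t pkt_info = 0x140;
	return (pkt_info >> 0x04) & 0x7F;   /* == 0x14 */
}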
* [dpdk-dev] [PATCH v8 06/18] i40e: replace bit mask based packet type with unified packet type
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (4 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 05/18] ixgbe: " Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 07/18] enic: " Helin Zhang
` (13 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced with the unified packet type.
To avoid breaking ABI compatibility, all of these changes are enabled
only by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/i40e/i40e_rxtx.c | 528 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 528 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
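The heart of the i40e change is a single table lookup: the 8-bit hardware ptype taken from the RX descriptor indexes a cache-aligned array of RTE_PTYPE_* combinations, and reserved indexes are left as unknown. A condensed sketch of that idea (illustration only; the full table in the diff below covers every i40e hardware ptype):

#include <stdint.h>
#include <rte_mbuf.h>

/* Illustration only: a two-entry version of the mapping idea.
 * Entries [22] and [26] match the full table in the patch below;
 * all unlisted indexes stay 0, i.e. RTE_PTYPE_UNKNOWN. */
static inline uint32_t
ptype_map_sketch(uint8_t hw_ptype)
{
	static const uint32_t table[UINT8_MAX + 1] = {
		[22] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
		       RTE_PTYPE_L4_FRAG,
		[26] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
		       RTE_PTYPE_L4_TCP,
	};

	return table[hw_ptype];
}

The RX paths in the diff then extract the 8-bit ptype from qword1 of the descriptor and store the mapped value in mbuf->packet_type.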
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index b2e1d6d..b951da0 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -176,6 +176,514 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
return flags;
}
+#ifdef RTE_NEXT_ABI
+/* The hardware datasheet describes the meaning of each value in more detail */
+static inline uint32_t
+i40e_rxd_pkt_type_mapping(uint8_t ptype)
+{
+ static const uint32_t ptype_table[UINT8_MAX] __rte_cache_aligned = {
+ /* L2 types */
+ /* [0] reserved */
+ [1] = RTE_PTYPE_L2_MAC,
+ [2] = RTE_PTYPE_L2_MAC_TIMESYNC,
+ /* [3] - [5] reserved */
+ [6] = RTE_PTYPE_L2_LLDP,
+ /* [7] - [10] reserved */
+ [11] = RTE_PTYPE_L2_ARP,
+ /* [12] - [21] reserved */
+
+ /* Non tunneled IPv4 */
+ [22] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [24] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [25] reserved */
+ [26] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv4 */
+ [29] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [30] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [31] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [32] reserved */
+ [33] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [34] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [35] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> IPv6 */
+ [36] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [37] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [38] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [39] reserved */
+ [40] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [41] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [42] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN */
+ [43] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+ [44] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [45] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [46] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [47] reserved */
+ [48] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [49] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [50] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+ [51] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [52] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [53] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [54] reserved */
+ [55] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [56] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [57] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+ [58] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [59] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [60] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [61] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [62] reserved */
+ [63] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [64] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [65] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [66] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [67] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [68] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [69] reserved */
+ [70] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [71] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [72] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [73] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [74] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [75] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [76] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [77] reserved */
+ [78] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [79] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [80] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [81] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [82] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [83] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [84] reserved */
+ [85] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [86] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [87] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* Non tunneled IPv6 */
+ [88] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [90] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [91] reserved */
+ [92] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv4 */
+ [95] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [96] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [97] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [98] reserved */
+ [99] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [100] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [101] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> IPv6 */
+ [102] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [103] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [104] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [105] reserved */
+ [106] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [107] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [108] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN */
+ [109] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+ [110] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [111] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [112] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [113] reserved */
+ [114] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [115] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [116] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+ [117] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [118] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [119] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [120] reserved */
+ [121] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [122] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [123] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+ [124] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [125] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [126] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [127] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [128] reserved */
+ [129] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [130] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [131] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [132] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [133] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [134] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [135] reserved */
+ [136] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [137] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [138] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [139] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [140] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [141] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [142] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [143] reserved */
+ [144] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [145] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [146] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [147] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [148] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [149] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [150] reserved */
+ [151] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [152] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [153] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_MAC_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* All others reserved */
+ };
+
+ return ptype_table[ptype];
+}
+#else /* RTE_NEXT_ABI */
/* Translate pkt types to pkt flags */
static inline uint64_t
i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
@@ -443,6 +951,7 @@ i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
return ip_ptype_map[ptype];
}
+#endif /* RTE_NEXT_ABI */
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_FD_ID 0x01
@@ -730,11 +1239,18 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
i40e_rxd_to_vlan_tci(mb, &rxdp[j]);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+#ifdef RTE_NEXT_ABI
+ mb->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT));
+#else
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
mb->packet_type = (uint16_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT);
+#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -971,9 +1487,15 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
i40e_rxd_to_vlan_tci(rxm, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
+#else
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT);
+#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1129,10 +1651,16 @@ i40e_recv_scattered_pkts(void *rx_queue,
i40e_rxd_to_vlan_tci(first_seg, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+#ifdef RTE_NEXT_ABI
+ first_seg->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
+#else
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
first_seg->packet_type = (uint16_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT);
+#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 07/18] enic: replace bit mask based packet type with unified packet type
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (5 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 06/18] i40e: " Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 08/18] vmxnet3: " Helin Zhang
` (12 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced with the unified packet type.
To avoid breaking ABI compatibility, all of these changes are enabled
only by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/enic/enic_main.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
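One point worth calling out here: only the packet type information leaves 'ol_flags'; checksum and RSS results stay where they were. A minimal sketch of the resulting split (illustration only, hypothetical helper name, not part of the patch):

#include <stdbool.h>
#include <rte_mbuf.h>

/* Illustration only: the L3 type now goes into mbuf->packet_type,
 * while checksum results remain ol_flags bits, mirroring the enic
 * RX path changed below. */
static inline void
fill_rx_meta_sketch(struct rte_mbuf *m, bool is_ipv4, bool ip_csum_ok)
{
	m->packet_type = is_ipv4 ? RTE_PTYPE_L3_IPV4 : RTE_PTYPE_L3_IPV6;
	if (is_ipv4 && !ip_csum_ok)
		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;
}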
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 15313c2..f47e96c 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -423,7 +423,11 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -432,7 +436,11 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
} else {
/* Header split */
if (sop && !eop) {
@@ -445,7 +453,11 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -457,13 +469,22 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
+#ifdef RTE_NEXT_ABI
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV4;
+#else
hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -475,7 +496,12 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
+#ifdef RTE_NEXT_ABI
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV6;
+#else
hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
}
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 08/18] vmxnet3: replace bit mask based packet type with unified packet type
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (6 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 07/18] enic: " Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 09/18] fm10k: " Helin Zhang
` (11 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced with the unified packet type.
To avoid breaking ABI compatibility, all of these changes are enabled
only by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/vmxnet3/vmxnet3_rxtx.c | 8 ++++++++
1 file changed, 8 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index a1eac45..25ae2f6 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -649,9 +649,17 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
+#else
rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+#endif
else
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type = RTE_PTYPE_L3_IPV4;
+#else
rxm->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!rcd->cnc) {
if (!rcd->ipc)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 09/18] fm10k: replace bit mask based packet type with unified packet type
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (7 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 08/18] vmxnet3: " Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 10/18] app/test-pipeline: " Helin Zhang
` (10 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced with the unified packet type.
To avoid breaking ABI compatibility, all of these changes are enabled
only by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/fm10k/fm10k_rxtx.c | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)
v4 changes:
* Added unified packet type support for fm10k starting from v4.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index f5d1ad0..4b00f5c 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -68,12 +68,37 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
static inline void
rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
{
+#ifdef RTE_NEXT_ABI
+ static const uint32_t
+ ptype_table[FM10K_RXD_PKTTYPE_MASK >> FM10K_RXD_PKTTYPE_SHIFT]
+ __rte_cache_aligned = {
+ [FM10K_PKTTYPE_OTHER] = RTE_PTYPE_L2_MAC,
+ [FM10K_PKTTYPE_IPV4] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV4,
+ [FM10K_PKTTYPE_IPV4_EX] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [FM10K_PKTTYPE_IPV6] = RTE_PTYPE_L2_MAC | RTE_PTYPE_L3_IPV6,
+ [FM10K_PKTTYPE_IPV6_EX] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_MAC |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ };
+
+ m->packet_type = ptype_table[(d->w.pkt_info & FM10K_RXD_PKTTYPE_MASK)
+ >> FM10K_RXD_PKTTYPE_SHIFT];
+#else /* RTE_NEXT_ABI */
uint16_t ptype;
static const uint16_t pt_lut[] = { 0,
PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT,
PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT,
0, 0, 0
};
+#endif /* RTE_NEXT_ABI */
if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK)
m->ol_flags |= PKT_RX_RSS_HASH;
@@ -97,9 +122,11 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
if (unlikely(d->d.staterr & FM10K_RXD_STATUS_RXE))
m->ol_flags |= PKT_RX_RECIP_ERR;
+#ifndef RTE_NEXT_ABI
ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >>
FM10K_RXD_PKTTYPE_SHIFT;
m->ol_flags |= pt_lut[(uint8_t)ptype];
+#endif
}
uint16_t
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 10/18] app/test-pipeline: replace bit mask based packet type with unified packet type
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (8 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 09/18] fm10k: " Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 11/18] app/testpmd: " Helin Zhang
` (9 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced with the unified packet type.
To avoid breaking ABI compatibility, all of these changes are enabled
only by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pipeline/pipeline_hash.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 4598ad4..aa3f9e5 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,20 +459,33 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;
k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+#else
} else {
+#endif
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;
memcpy(key, ipv6_dst, 16);
+#ifdef RTE_NEXT_ABI
+ } else
+ continue;
+#else
}
+#endif
*signature = test_hash(key, 0, 0);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 11/18] app/testpmd: replace bit mask based packet type with unified packet type
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (9 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 10/18] app/test-pipeline: " Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 12/18] app/test: Remove useless code Helin Zhang
` (8 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced with the unified packet type.
To avoid breaking ABI compatibility, all of these changes are enabled
only by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
app/test-pmd/csumonly.c | 14 ++++
app/test-pmd/rxonly.c | 183 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 197 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v4 changes:
* Added printing of the packet type of each received packet in rxonly mode.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
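For application writers, the interesting part of this patch is how the unified packet_type is consumed: tunnel detection becomes a single RTE_ETH_IS_TUNNEL_PKT() check, and each layer is decoded by masking with the corresponding RTE_PTYPE_*_MASK. A compact sketch (illustration only; the rxonly code below prints every field exhaustively):

#include <stdio.h>
#include <rte_mbuf.h>

/* Illustration only: decoding a unified packet_type the way the
 * rxonly/csumonly changes below do, using the masks and helper
 * macros introduced in rte_mbuf.h by this series. */
static void
show_ptype_sketch(const struct rte_mbuf *m)
{
	uint32_t pt = m->packet_type;

	if (RTE_ETH_IS_TUNNEL_PKT(pt))
		printf(" - tunneled packet\n");

	switch (pt & RTE_PTYPE_L3_MASK) {
	case RTE_PTYPE_L3_IPV4:
		printf(" - outer L3: IPv4\n");
		break;
	case RTE_PTYPE_L3_IPV6:
		printf(" - outer L3: IPv6\n");
		break;
	default:
		printf(" - outer L3: other/unknown\n");
		break;
	}

	if ((pt & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)
		printf(" - outer L4: UDP\n");
}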
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 950ea82..fab9600 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -202,8 +202,14 @@ parse_ethernet(struct ether_hdr *eth_hdr, struct testpmd_offload_info *info)
/* Parse a vxlan header */
static void
+#ifdef RTE_NEXT_ABI
+parse_vxlan(struct udp_hdr *udp_hdr,
+ struct testpmd_offload_info *info,
+ uint32_t pkt_type)
+#else
parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
uint64_t mbuf_olflags)
+#endif
{
struct ether_hdr *eth_hdr;
@@ -211,8 +217,12 @@ parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
* (rfc7348) or that the rx offload flag is set (i40e only
* currently) */
if (udp_hdr->dst_port != _htons(4789) &&
+#ifdef RTE_NEXT_ABI
+ RTE_ETH_IS_TUNNEL_PKT(pkt_type) == 0)
+#else
(mbuf_olflags & (PKT_RX_TUNNEL_IPV4_HDR |
PKT_RX_TUNNEL_IPV6_HDR)) == 0)
+#endif
return;
info->is_tunnel = 1;
@@ -549,7 +559,11 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
struct udp_hdr *udp_hdr;
udp_hdr = (struct udp_hdr *)((char *)l3_hdr +
info.l3_len);
+#ifdef RTE_NEXT_ABI
+ parse_vxlan(udp_hdr, &info, m->packet_type);
+#else
parse_vxlan(udp_hdr, &info, m->ol_flags);
+#endif
} else if (info.l4_proto == IPPROTO_GRE) {
struct simple_gre_hdr *gre_hdr;
gre_hdr = (struct simple_gre_hdr *)
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index f6a2f84..5a30347 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -91,7 +91,11 @@ pkt_burst_receive(struct fwd_stream *fs)
uint64_t ol_flags;
uint16_t nb_rx;
uint16_t i, packet_type;
+#ifdef RTE_NEXT_ABI
+ uint16_t is_encapsulation;
+#else
uint64_t is_encapsulation;
+#endif
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
@@ -135,8 +139,12 @@ pkt_burst_receive(struct fwd_stream *fs)
ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
+#ifdef RTE_NEXT_ABI
+ is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
+#else
is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
PKT_RX_TUNNEL_IPV6_HDR);
+#endif
print_ether_addr(" src=", ð_hdr->s_addr);
print_ether_addr(" - dst=", ð_hdr->d_addr);
@@ -163,6 +171,177 @@ pkt_burst_receive(struct fwd_stream *fs)
if (ol_flags & PKT_RX_QINQ_PKT)
printf(" - QinQ VLAN tci=0x%x, VLAN tci outer=0x%x",
mb->vlan_tci, mb->vlan_tci_outer);
+#ifdef RTE_NEXT_ABI
+ if (mb->packet_type) {
+ uint32_t ptype;
+
+ /* (outer) L2 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L2_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L2_MAC:
+ printf(" - (outer) L2 type: MAC");
+ break;
+ case RTE_PTYPE_L2_MAC_TIMESYNC:
+ printf(" - (outer) L2 type: MAC Timesync");
+ break;
+ case RTE_PTYPE_L2_ARP:
+ printf(" - (outer) L2 type: ARP");
+ break;
+ case RTE_PTYPE_L2_LLDP:
+ printf(" - (outer) L2 type: LLDP");
+ break;
+ default:
+ printf(" - (outer) L2 type: Unknown");
+ break;
+ }
+
+ /* (outer) L3 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L3_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L3_IPV4:
+ printf(" - (outer) L3 type: IPV4");
+ break;
+ case RTE_PTYPE_L3_IPV4_EXT:
+ printf(" - (outer) L3 type: IPV4_EXT");
+ break;
+ case RTE_PTYPE_L3_IPV6:
+ printf(" - (outer) L3 type: IPV6");
+ break;
+ case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
+ printf(" - (outer) L3 type: IPV4_EXT_UNKNOWN");
+ break;
+ case RTE_PTYPE_L3_IPV6_EXT:
+ printf(" - (outer) L3 type: IPV6_EXT");
+ break;
+ case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN:
+ printf(" - (outer) L3 type: IPV6_EXT_UNKNOWN");
+ break;
+ default:
+ printf(" - (outer) L3 type: Unknown");
+ break;
+ }
+
+ /* (outer) L4 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L4_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L4_TCP:
+ printf(" - (outer) L4 type: TCP");
+ break;
+ case RTE_PTYPE_L4_UDP:
+ printf(" - (outer) L4 type: UDP");
+ break;
+ case RTE_PTYPE_L4_FRAG:
+ printf(" - (outer) L4 type: L4_FRAG");
+ break;
+ case RTE_PTYPE_L4_SCTP:
+ printf(" - (outer) L4 type: SCTP");
+ break;
+ case RTE_PTYPE_L4_ICMP:
+ printf(" - (outer) L4 type: ICMP");
+ break;
+ case RTE_PTYPE_L4_NONFRAG:
+ printf(" - (outer) L4 type: L4_NONFRAG");
+ break;
+ default:
+ printf(" - (outer) L4 type: Unknown");
+ break;
+ }
+
+ /* packet tunnel type */
+ ptype = mb->packet_type & RTE_PTYPE_TUNNEL_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_TUNNEL_IP:
+ printf(" - Tunnel type: IP");
+ break;
+ case RTE_PTYPE_TUNNEL_GRE:
+ printf(" - Tunnel type: GRE");
+ break;
+ case RTE_PTYPE_TUNNEL_VXLAN:
+ printf(" - Tunnel type: VXLAN");
+ break;
+ case RTE_PTYPE_TUNNEL_NVGRE:
+ printf(" - Tunnel type: NVGRE");
+ break;
+ case RTE_PTYPE_TUNNEL_GENEVE:
+ printf(" - Tunnel type: GENEVE");
+ break;
+ case RTE_PTYPE_TUNNEL_GRENAT:
+ printf(" - Tunnel type: GRENAT");
+ break;
+ default:
+ printf(" - Tunnel type: Unkown");
+ break;
+ }
+
+ /* inner L2 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_L2_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L2_MAC:
+ printf(" - Inner L2 type: MAC");
+ break;
+ case RTE_PTYPE_INNER_L2_MAC_VLAN:
+ printf(" - Inner L2 type: MAC_VLAN");
+ break;
+ default:
+ printf(" - Inner L2 type: Unknown");
+ break;
+ }
+
+ /* inner L3 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_INNER_L3_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L3_IPV4:
+ printf(" - Inner L3 type: IPV4");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV4_EXT:
+ printf(" - Inner L3 type: IPV4_EXT");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6:
+ printf(" - Inner L3 type: IPV6");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN:
+ printf(" - Inner L3 type: IPV4_EXT_UNKNOWN");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6_EXT:
+ printf(" - Inner L3 type: IPV6_EXT");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN:
+ printf(" - Inner L3 type: IPV6_EXT_UNKOWN");
+ break;
+ default:
+ printf(" - Inner L3 type: Unkown");
+ break;
+ }
+
+ /* inner L4 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_L4_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L4_TCP:
+ printf(" - Inner L4 type: TCP");
+ break;
+ case RTE_PTYPE_INNER_L4_UDP:
+ printf(" - Inner L4 type: UDP");
+ break;
+ case RTE_PTYPE_INNER_L4_FRAG:
+ printf(" - Inner L4 type: L4_FRAG");
+ break;
+ case RTE_PTYPE_INNER_L4_SCTP:
+ printf(" - Inner L4 type: SCTP");
+ break;
+ case RTE_PTYPE_INNER_L4_ICMP:
+ printf(" - Inner L4 type: ICMP");
+ break;
+ case RTE_PTYPE_INNER_L4_NONFRAG:
+ printf(" - Inner L4 type: L4_NONFRAG");
+ break;
+ default:
+ printf(" - Inner L4 type: Unknown");
+ break;
+ }
+ printf("\n");
+ } else
+ printf("Unknown packet type\n");
+#endif /* RTE_NEXT_ABI */
if (is_encapsulation) {
struct ipv4_hdr *ipv4_hdr;
struct ipv6_hdr *ipv6_hdr;
@@ -176,7 +355,11 @@ pkt_burst_receive(struct fwd_stream *fs)
l2_len = sizeof(struct ether_hdr);
/* Do not support ipv4 option field */
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
+#else
if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
+#endif
l3_len = sizeof(struct ipv4_hdr);
ipv4_hdr = (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 12/18] app/test: Remove useless code
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (10 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 11/18] app/testpmd: " Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
` (7 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
Several useless lines of code were added accidentally, and they block
packet type unification. They should be removed entirely.
To avoid breaking ABI compatibility, all of these changes are enabled
only by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test/packet_burst_generator.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
v4 changes:
* Removed several useless code lines which block packet type unification.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index b46eed7..61e6340 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -272,19 +272,21 @@ nomore_mbuf:
if (ipv4) {
pkt->vlan_tci = ETHER_TYPE_IPv4;
pkt->l3_len = sizeof(struct ipv4_hdr);
-
+#ifndef RTE_NEXT_ABI
if (vlan_enabled)
pkt->ol_flags = PKT_RX_IPV4_HDR | PKT_RX_VLAN_PKT;
else
pkt->ol_flags = PKT_RX_IPV4_HDR;
+#endif
} else {
pkt->vlan_tci = ETHER_TYPE_IPv6;
pkt->l3_len = sizeof(struct ipv6_hdr);
-
+#ifndef RTE_NEXT_ABI
if (vlan_enabled)
pkt->ol_flags = PKT_RX_IPV6_HDR | PKT_RX_VLAN_PKT;
else
pkt->ol_flags = PKT_RX_IPV6_HDR;
+#endif
}
pkts_burst[nb_pkt] = pkt;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (11 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 12/18] app/test: Remove useless code Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 14/18] examples/ip_reassembly: " Helin Zhang
` (6 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced with the unified packet type.
To avoid breaking ABI compatibility, all of these changes are enabled
only by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_fragmentation/main.c | 9 +++++++++
1 file changed, 9 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
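The example applications all switch on the same pattern: an RTE_NEXT_ABI-guarded check of mbuf->packet_type instead of the old ol_flags bits. A minimal sketch of the before/after shape (illustration only, hypothetical helper name):

#include <stdbool.h>
#include <rte_mbuf.h>

/* Illustration only: the dispatch shape the examples move to,
 * guarded by RTE_NEXT_ABI exactly as in the diff below. */
static inline bool
is_ipv4_pkt_sketch(const struct rte_mbuf *m)
{
#ifdef RTE_NEXT_ABI
	return RTE_ETH_IS_IPV4_HDR(m->packet_type);
#else
	return (m->ol_flags & PKT_RX_IPV4_HDR) != 0;
#endif
}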
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 0922ba6..b71d05f 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -283,7 +283,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
len = qconf->tx_mbufs[port_out].len;
/* if this is an IPv4 packet */
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
/* Read the lookup key (i.e. ip_dst) from the input packet */
@@ -317,9 +321,14 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
if (unlikely (len2 < 0))
return;
}
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if this is an IPv6 packet */
+#else
}
/* if this is an IPv6 packet */
else if (m->ol_flags & PKT_RX_IPV6_HDR) {
+#endif
struct ipv6_hdr *ip_hdr;
ipv6 = 1;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 14/18] examples/ip_reassembly: replace bit mask based packet type with unified packet type
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (12 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 15/18] examples/l3fwd-acl: " Helin Zhang
` (5 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced with the unified packet type.
To avoid breaking ABI compatibility, all of these changes are enabled
only by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_reassembly/main.c | 9 +++++++++
1 file changed, 9 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 9ecb6f9..f1c47ad 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -356,7 +356,11 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dst_port = portid;
/* if packet is IPv4 */
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & (PKT_RX_IPV4_HDR)) {
+#endif
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
@@ -396,9 +400,14 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
}
eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if packet is IPv6 */
+#else
}
/* if packet is IPv6 */
else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
+#endif
struct ipv6_extension_fragment *frag_hdr;
struct ipv6_hdr *ip_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 15/18] examples/l3fwd-acl: replace bit mask based packet type with unified packet type
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (13 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 14/18] examples/ip_reassembly: " Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 16/18] examples/l3fwd-power: " Helin Zhang
` (4 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced with the unified packet type.
To avoid breaking ABI compatibility, all of these changes are enabled
only by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-acl/main.c | 29 +++++++++++++++++++++++------
1 file changed, 23 insertions(+), 6 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a5d4f25..78b6df2 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -645,10 +645,13 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
+#else
int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
if (type == PKT_RX_IPV4_HDR) {
-
+#endif
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt,
unsigned char *) + sizeof(struct ether_hdr));
@@ -667,9 +670,11 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
/* Not a valid IPv4 packet */
rte_pktmbuf_free(pkt);
}
-
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
+#else
} else if (type == PKT_RX_IPV6_HDR) {
-
+#endif
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -687,17 +692,22 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
+#else
int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
if (type == PKT_RX_IPV4_HDR) {
-
+#endif
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;
-
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
+#else
} else if (type == PKT_RX_IPV6_HDR) {
-
+#endif
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -745,10 +755,17 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
+ dump_acl4_rule(m, res);
+ else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
+ dump_acl6_rule(m, res);
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR)
dump_acl4_rule(m, res);
else
dump_acl6_rule(m, res);
+#endif /* RTE_NEXT_ABI */
}
#endif
rte_pktmbuf_free(m);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 16/18] examples/l3fwd-power: replace bit mask based packet type with unified packet type
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (14 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 15/18] examples/l3fwd-acl: " Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 17/18] examples/l3fwd: " Helin Zhang
` (3 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled
only under RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-power/main.c | 8 ++++++++
1 file changed, 8 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 6057059..705188f 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -635,7 +635,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
/* Handle IPv4 headers.*/
ipv4_hdr =
(struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char*)
@@ -670,8 +674,12 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+#else
}
else {
+#endif
/* Handle IPv6 headers.*/
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
struct ipv6_hdr *ipv6_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v8 17/18] examples/l3fwd: replace bit mask based packet type with unified packet type
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (15 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 16/18] examples/l3fwd-power: " Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 18/18] mbuf: remove old packet type bit masks Helin Zhang
` (2 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled
only under RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd/main.c | 123 ++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 120 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v3 changes:
* Minor bug fixes and enhancements.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 7e4bbfd..eff9580 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -948,7 +948,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
/* Handle IPv4 headers.*/
ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, unsigned char *) +
sizeof(struct ether_hdr));
@@ -979,8 +983,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
-
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+#else
} else {
+#endif
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;
@@ -999,8 +1006,13 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
+#ifdef RTE_NEXT_ABI
+ } else
+ /* Free the mbuf that contains non-IPV4/IPV6 packet */
+ rte_pktmbuf_free(m);
+#else
}
-
+#endif
}
#ifdef DO_RFC_1812_CHECKS
@@ -1024,12 +1036,19 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
+#ifdef RTE_NEXT_ABI
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
+#else
rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
+#endif
{
uint8_t ihl;
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(ptype)) {
+#else
if ((flags & PKT_RX_IPV4_HDR) != 0) {
-
+#endif
ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
ipv4_hdr->time_to_live--;
@@ -1059,11 +1078,19 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
+#else
if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
+#else
} else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
+#endif
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1097,12 +1124,52 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];
dst_port[0] = dp;
+#ifdef RTE_NEXT_ABI
+ rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
+#else
rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
+#endif
te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
}
+#ifdef RTE_NEXT_ABI
+/*
+ * Read packet_type and destination IPV4 addresses from 4 mbufs.
+ */
+static inline void
+processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
+ __m128i *dip,
+ uint32_t *ipv4_flag)
+{
+ struct ipv4_hdr *ipv4_hdr;
+ struct ether_hdr *eth_hdr;
+ uint32_t x0, x1, x2, x3;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x0 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] = pkt[0]->packet_type & RTE_PTYPE_L3_IPV4;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x1 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] &= pkt[1]->packet_type;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x2 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] &= pkt[2]->packet_type;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x3 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] &= pkt[3]->packet_type;
+
+ dip[0] = _mm_set_epi32(x3, x2, x1, x0);
+}
+#else /* RTE_NEXT_ABI */
/*
* Read ol_flags and destination IPV4 addresses from 4 mbufs.
*/
@@ -1135,14 +1202,24 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
+#endif /* RTE_NEXT_ABI */
/*
* Lookup into LPM for destination port.
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
+#ifdef RTE_NEXT_ABI
+processx4_step2(const struct lcore_conf *qconf,
+ __m128i dip,
+ uint32_t ipv4_flag,
+ uint8_t portid,
+ struct rte_mbuf *pkt[FWDSTEP],
+ uint16_t dprt[FWDSTEP])
+#else
processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
+#endif /* RTE_NEXT_ABI */
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1152,7 +1229,11 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);
/* if all 4 packets are IPV4. */
+#ifdef RTE_NEXT_ABI
+ if (likely(ipv4_flag)) {
+#else
if (likely(flag != 0)) {
+#endif
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1202,6 +1283,16 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[2], te[2]);
_mm_store_si128(p[3], te[3]);
+#ifdef RTE_NEXT_ABI
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
+ &dst_port[0], pkt[0]->packet_type);
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
+ &dst_port[1], pkt[1]->packet_type);
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
+ &dst_port[2], pkt[2]->packet_type);
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
+ &dst_port[3], pkt[3]->packet_type);
+#else /* RTE_NEXT_ABI */
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
&dst_port[0], pkt[0]->ol_flags);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
@@ -1210,6 +1301,7 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
&dst_port[2], pkt[2]->ol_flags);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
&dst_port[3], pkt[3]->ol_flags);
+#endif /* RTE_NEXT_ABI */
}
/*
@@ -1396,7 +1488,11 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
+#ifdef RTE_NEXT_ABI
+ uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
+#else
uint32_t flag[MAX_PKT_BURST / FWDSTEP];
+#endif
uint16_t pnum[MAX_PKT_BURST + 1];
#endif
@@ -1466,6 +1562,18 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
for (j = 0; j < n ; j+=4) {
+#ifdef RTE_NEXT_ABI
+ uint32_t pkt_type =
+ pkts_burst[j]->packet_type &
+ pkts_burst[j+1]->packet_type &
+ pkts_burst[j+2]->packet_type &
+ pkts_burst[j+3]->packet_type;
+ if (pkt_type & RTE_PTYPE_L3_IPV4) {
+ simple_ipv4_fwd_4pkts(
+ &pkts_burst[j], portid, qconf);
+ } else if (pkt_type &
+ RTE_PTYPE_L3_IPV6) {
+#else /* RTE_NEXT_ABI */
uint32_t ol_flag = pkts_burst[j]->ol_flags
& pkts_burst[j+1]->ol_flags
& pkts_burst[j+2]->ol_flags
@@ -1474,6 +1582,7 @@ main_loop(__attribute__((unused)) void *dummy)
simple_ipv4_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else if (ol_flag & PKT_RX_IPV6_HDR) {
+#endif /* RTE_NEXT_ABI */
simple_ipv6_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1498,13 +1607,21 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
+#ifdef RTE_NEXT_ABI
+ &ipv4_flag[j / FWDSTEP]);
+#else
&flag[j / FWDSTEP]);
+#endif
}
k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
+#ifdef RTE_NEXT_ABI
+ ipv4_flag[j / FWDSTEP], portid,
+#else
flag[j / FWDSTEP], portid,
+#endif
&pkts_burst[j], &dst_port[j]);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
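One detail of the l3fwd rework above worth spelling out: processx4_step1()
and the burst loop AND the packet_type of four mbufs together and test the
result against RTE_PTYPE_L3_IPV4. This works because all IPv4 L3 values
defined in patch 03/18 (0x10, 0x30, 0x90) keep bit 4 set and all IPv6
values (0x40, 0xc0, 0xe0) keep bit 6 set, so the bit survives the AND only
when every packet in the group carries that L3 family. A sketch of the
check, assuming RTE_NEXT_ABI; burst_is_ipv4() is a made-up name used here
only for illustration.

#include <rte_mbuf.h>

/* True only when all four packets have an IPv4 (outer) L3 header. */
static int
burst_is_ipv4(struct rte_mbuf *pkt[4])
{
	uint32_t pkt_type = pkt[0]->packet_type &
			    pkt[1]->packet_type &
			    pkt[2]->packet_type &
			    pkt[3]->packet_type;

	/* e.g. 0x10 & 0x30 & 0x90 & 0x10 == 0x10, so bit 4 survives;
	 * one IPv6 packet (0x40/0xc0/0xe0) clears it. */
	return (pkt_type & RTE_PTYPE_L3_IPV4) != 0;
}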
* [dpdk-dev] [PATCH v8 18/18] mbuf: remove old packet type bit masks
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (16 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 17/18] examples/l3fwd: " Helin Zhang
@ 2015-06-23 1:50 ` Helin Zhang
2015-06-23 16:13 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Ananyev, Konstantin
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-06-23 1:50 UTC (permalink / raw)
To: dev
As the unified packet types are used instead, the old bit masks and
the related macros for packet type indication need to be removed.
To avoid breaking ABI compatibility, all the changes are enabled
only under RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 4 ++++
lib/librte_mbuf/rte_mbuf.h | 4 ++++
2 files changed, 8 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
* Redefined the bit masks for packet RX offload flags.
v5 changes:
* Rolled back the bit masks of RX flags, for ABI compatibility.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index f506517..4320dd4 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -251,14 +251,18 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
+#ifndef RTE_NEXT_ABI
case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
+#endif /* RTE_NEXT_ABI */
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
+#ifndef RTE_NEXT_ABI
case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
+#endif /* RTE_NEXT_ABI */
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 0ee0c55..74a7f41 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -91,14 +91,18 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
+#ifndef RTE_NEXT_ABI
#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
+#endif /* RTE_NEXT_ABI */
#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
+#ifndef RTE_NEXT_ABI
#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
+#endif /* RTE_NEXT_ABI */
#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double VLAN stripped. */
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v8 00/18] unified packet type
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (17 preceding siblings ...)
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 18/18] mbuf: remove old packet type bit masks Helin Zhang
@ 2015-06-23 16:13 ` Ananyev, Konstantin
2015-07-02 8:45 ` Liu, Yong
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
19 siblings, 1 reply; 257+ messages in thread
From: Ananyev, Konstantin @ 2015-06-23 16:13 UTC (permalink / raw)
To: Zhang, Helin, dev
> -----Original Message-----
> From: Zhang, Helin
> Sent: Tuesday, June 23, 2015 2:50 AM
> To: dev@dpdk.org
> Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev, Konstantin; Richardson, Bruce; yongwang@vmware.com;
> olivier.matz@6wind.com; Wu, Jingjing; Zhang, Helin
> Subject: [PATCH v8 00/18] unified packet type
>
> Currently only 6 bits which are stored in ol_flags are used to indicate the
> packet types. This is not enough, as some NIC hardware can recognize quite
> a lot of packet types, e.g. i40e hardware can recognize more than 150 packet
> types. Hiding those packet types hides hardware offload capabilities which
> could be quite useful for improving performance and for end users.
> So a unified packet type is needed to support all possible PMDs. The 16-bit
> packet_type field in the mbuf structure can be enlarged to 32 bits and used for
> this purpose. In addition, all packet type bits stored in the ol_flags field
> should be deleted entirely, and 6 bits of ol_flags can be saved as a benefit.
>
> Initially, 32 bits of packet_type can be divided into several sub fields to
> indicate different packet type information of a packet. The initial design
> is to divide those bits into fields for L2 types, L3 types, L4 types, tunnel
> types, inner L2 types, inner L3 types and inner L4 types. All PMDs should
> translate the offloaded packet types into these 7 fields of information, for
> user applications.
>
> To avoid breaking ABI compatibility, currently all the code changes for
> unified packet type are disabled at compile time by default. Users can enable
> it manually by defining the macro of RTE_NEXT_ABI. The code changes will be
> valid by default in a future release, and the old version will be deleted
> accordingly, after the ABI change process is done.
>
> Note that this patch set should be integrated after another patch set for
> '[PATCH v3 0/7] support i40e QinQ stripping and insertion', to clearly solve
> the conflict during integration. As both patch sets modified 'struct rte_mbuf',
> and the final layout of the 'struct rte_mbuf' is key to vectorized ixgbe PMD.
>
> v2 changes:
> * Enlarged the packet_type field from 16 bits to 32 bits.
> * Redefined the packet type sub-fields.
> * Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
> * Used redefined packet types and enlarged packet_type field for all PMDs
> and corresponding applications.
> * Removed changes in bond and its relevant application, as there is no need
> at all according to the recent bond changes.
>
> v3 changes:
> * Put the mbuf layout changes into a single patch.
> * Put vector ixgbe changes right after mbuf changes.
> * Disabled vector ixgbe PMD by default, as mbuf layout changed, and then
> re-enabled it after vector ixgbe PMD updated.
> * Put the definitions of unified packet type into a single patch.
> * Minor bug fixes and enhancements in l3fwd example.
>
> v4 changes:
> * Added detailed description of each packet types.
> * Supported unified packet type of fm10k.
> * Added printing logs of packet types of each received packet for rxonly
> mode in testpmd.
> * Removed several useless code lines which block packet type unification from
> app/test/packet_burst_generator.c.
>
> v5 changes:
> * Added more detailed description for each packet types, together with examples.
> * Rolled back the macro definitions of RX packet flags, for ABI compatibility.
>
> v6 changes:
> * Disabled the code changes for unified packet type by default, to
> avoid breaking ABI compatibility.
>
> v7 changes:
> * Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
> * Integrated with patch set for '[PATCH v3 0/7] support i40e QinQ stripping
> and insertion', to clearly solve the conflicts during merging.
>
> v8 changes:
> * Moved the field of 'vlan_tci_outer' in 'struct rte_mbuf' to the end of the 1st
> cache line, to avoid breaking any vectorized PMD storing, as fields of
> 'packet_type, pkt_len, data_len, vlan_tci, rss' should be in a contiguous 128
> bits.
>
> Helin Zhang (18):
> mbuf: redefine packet_type in rte_mbuf
> ixgbe: support unified packet type in vectorized PMD
> mbuf: add definitions of unified packet types
> e1000: replace bit mask based packet type with unified packet type
> ixgbe: replace bit mask based packet type with unified packet type
> i40e: replace bit mask based packet type with unified packet type
> enic: replace bit mask based packet type with unified packet type
> vmxnet3: replace bit mask based packet type with unified packet type
> fm10k: replace bit mask based packet type with unified packet type
> app/test-pipeline: replace bit mask based packet type with unified
> packet type
> app/testpmd: replace bit mask based packet type with unified packet
> type
> app/test: Remove useless code
> examples/ip_fragmentation: replace bit mask based packet type with
> unified packet type
> examples/ip_reassembly: replace bit mask based packet type with
> unified packet type
> examples/l3fwd-acl: replace bit mask based packet type with unified
> packet type
> examples/l3fwd-power: replace bit mask based packet type with unified
> packet type
> examples/l3fwd: replace bit mask based packet type with unified packet
> type
> mbuf: remove old packet type bit masks
>
> app/test-pipeline/pipeline_hash.c | 13 +
> app/test-pmd/csumonly.c | 14 +
> app/test-pmd/rxonly.c | 183 +++++++
> app/test/packet_burst_generator.c | 6 +-
> drivers/net/e1000/igb_rxtx.c | 102 ++++
> drivers/net/enic/enic_main.c | 26 +
> drivers/net/fm10k/fm10k_rxtx.c | 27 ++
> drivers/net/i40e/i40e_rxtx.c | 528 +++++++++++++++++++++
> drivers/net/ixgbe/ixgbe_rxtx.c | 163 +++++++
> drivers/net/ixgbe/ixgbe_rxtx_vec.c | 75 ++-
> drivers/net/vmxnet3/vmxnet3_rxtx.c | 8 +
> examples/ip_fragmentation/main.c | 9 +
> examples/ip_reassembly/main.c | 9 +
> examples/l3fwd-acl/main.c | 29 +-
> examples/l3fwd-power/main.c | 8 +
> examples/l3fwd/main.c | 123 ++++-
> .../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 +
> lib/librte_mbuf/rte_mbuf.c | 4 +
> lib/librte_mbuf/rte_mbuf.h | 517 ++++++++++++++++++++
> 19 files changed, 1837 insertions(+), 13 deletions(-)
>
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
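As a quick illustration of the field layout described in the quoted cover
letter, the sketch below splits a 32-bit packet_type into its seven
sub-fields with the RTE_PTYPE_*_MASK values from patch 03/18. It assumes
RTE_NEXT_ABI is enabled; dump_ptype() is a hypothetical helper, not part
of the patch set.

#include <stdio.h>
#include <stdint.h>
#include <rte_mbuf.h>

/* Print each sub-field of a unified packet type value. */
static void
dump_ptype(uint32_t ptype)
{
	printf("l2=%#x l3=%#x l4=%#x tunnel=%#x "
	       "inner_l2=%#x inner_l3=%#x inner_l4=%#x\n",
	       ptype & RTE_PTYPE_L2_MASK,
	       ptype & RTE_PTYPE_L3_MASK,
	       ptype & RTE_PTYPE_L4_MASK,
	       ptype & RTE_PTYPE_TUNNEL_MASK,
	       ptype & RTE_PTYPE_INNER_L2_MASK,
	       ptype & RTE_PTYPE_INNER_INNER_L3_MASK,
	       ptype & RTE_PTYPE_INNER_L4_MASK);
}

For example, the IP-in-IP case documented in patch 03/18 (RTE_PTYPE_L2_MAC |
RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_IP |
RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_ICMP) is the single
32-bit value 0x05601091, and each mask above pulls out one group of nibbles
from it.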
* Re: [dpdk-dev] [PATCH v8 03/18] mbuf: add definitions of unified packet types
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 03/18] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-06-30 8:43 ` Olivier MATZ
2015-07-02 1:30 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Olivier MATZ @ 2015-06-30 8:43 UTC (permalink / raw)
To: Helin Zhang, dev
Hi Helin,
This is greatly documented, thanks!
Please find a small comment below.
On 06/23/2015 03:50 AM, Helin Zhang wrote:
> As there are only 6 bit flags in ol_flags for indicating packet
> types, which is not enough to describe all the possible packet
> types hardware can recognize. For example, i40e hardware can
> recognize more than 150 packet types. Unified packet type is
> composed of L2 type, L3 type, L4 type, tunnel type, inner L2 type,
> inner L3 type and inner L4 type fields, and can be stored in
> 'struct rte_mbuf' of 32 bits field 'packet_type'.
> To avoid breaking ABI compatibility, all the changes would be
> enabled by RTE_NEXT_ABI, which is disabled by default.
>
> [...]
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 0315561..0ee0c55 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -201,6 +201,493 @@ extern "C" {
> /* Use final bit of flags to indicate a control mbuf */
> #define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
>
> +#ifdef RTE_NEXT_ABI
> +/*
> + * 32 bits are divided into several fields to mark packet types. Note that
> + * each field is indexical.
> + * - Bit 3:0 is for L2 types.
> + * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
> + * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
> + * - Bit 15:12 is for tunnel types.
> + * - Bit 19:16 is for inner L2 types.
> + * - Bit 23:20 is for inner L3 types.
> + * - Bit 27:24 is for inner L4 types.
> + * - Bit 31:28 is reserved.
> + *
> + * To be compatible with Vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
> + * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
> + * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
> + *
> + * Note that L3 types values are selected for checking IPV4/IPV6 header from
> + * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
> + * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
> + *
> + * Note that the packet types of the same packet recognized by different
> + * hardware may be different, as different hardware may have different
> + * capability of packet type recognition.
> + *
> + * examples:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=0x29
> + * | 'version'=6, 'next header'=0x3A
> + * | 'ICMPv6 header'>
> + * will be recognized on i40e hardware as packet type combination of,
> + * RTE_PTYPE_L2_MAC |
> + * RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> + * RTE_PTYPE_TUNNEL_IP |
> + * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> + * RTE_PTYPE_INNER_L4_ICMP.
> + *
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=0x2F
> + * | 'GRE header'
> + * | 'version'=6, 'next header'=0x11
> + * | 'UDP header'>
> + * will be recognized on i40e hardware as packet type combination of,
> + * RTE_PTYPE_L2_MAC |
> + * RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> + * RTE_PTYPE_TUNNEL_GRENAT |
> + * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> + * RTE_PTYPE_INNER_L4_UDP.
> + */
> +#define RTE_PTYPE_UNKNOWN 0x00000000
> +/**
> + * MAC (Media Access Control) packet type.
> + * It is used for outer packet for tunneling cases.
> + *
> + * Packet format:
> + * <'ether type'=[0x0800|0x86DD|others]>
> + */
> +#define RTE_PTYPE_L2_MAC 0x00000001
I'm wondering if RTE_PTYPE_L2_ETHER is not a better name?
> +/**
> + * MAC (Media Access Control) packet type for time sync.
> + *
> + * Packet format:
> + * <'ether type'=0x88F7>
> + */
> +#define RTE_PTYPE_L2_MAC_TIMESYNC 0x00000002
> +/**
> + * ARP (Address Resolution Protocol) packet type.
> + *
> + * Packet format:
> + * <'ether type'=0x0806>
> + */
> +#define RTE_PTYPE_L2_ARP 0x00000003
> +/**
> + * LLDP (Link Layer Discovery Protocol) packet type.
> + *
> + * Packet format:
> + * <'ether type'=0x88CC>
> + */
> +#define RTE_PTYPE_L2_LLDP 0x00000004
Maybe ETHER should appear in these names too, what do you think?
> +/**
> + * Mask of layer 2 packet types.
> + * It is used for outer packet for tunneling cases.
> + */
> +#define RTE_PTYPE_L2_MASK 0x0000000f
> +/**
> + * IP (Internet Protocol) version 4 packet type.
> + * It is used for outer packet for tunneling cases, and does not contain any
> + * header option.
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'ihl'=5>
> + */
> +#define RTE_PTYPE_L3_IPV4 0x00000010
> +/**
> + * IP (Internet Protocol) version 4 packet type.
> + * It is used for outer packet for tunneling cases, and contains header
> + * options.
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'ihl'=[6-15], 'options'>
> + */
> +#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
> +/**
> + * IP (Internet Protocol) version 6 packet type.
> + * It is used for outer packet for tunneling cases, and does not contain any
> + * extension header.
> + *
> + * Packet format:
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=0x3B>
> + */
> +#define RTE_PTYPE_L3_IPV6 0x00000040
> +/**
> + * IP (Internet Protocol) version 4 packet type.
> + * It is used for outer packet for tunneling cases, and may or may not contain
> + * header options.
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'ihl'=[5-15], <'options'>>
> + */
> +#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
> +/**
> + * IP (Internet Protocol) version 6 packet type.
> + * It is used for outer packet for tunneling cases, and contains extension
> + * headers.
> + *
> + * Packet format:
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
> + * 'extension headers'>
> + */
> +#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
> +/**
> + * IP (Internet Protocol) version 6 packet type.
> + * It is used for outer packet for tunneling cases, and may or may not contain
> + * extension headers.
> + *
> + * Packet format:
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
> + * <'extension headers'>>
> + */
> +#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
> +/**
> + * Mask of layer 3 packet types.
> + * It is used for outer packet for tunneling cases.
> + */
> +#define RTE_PTYPE_L3_MASK 0x000000f0
> +/**
> + * TCP (Transmission Control Protocol) packet type.
> + * It is used for outer packet for tunneling cases.
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=6, 'MF'=0>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=6>
> + */
> +#define RTE_PTYPE_L4_TCP 0x00000100
> +/**
> + * UDP (User Datagram Protocol) packet type.
> + * It is used for outer packet for tunneling cases.
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=17, 'MF'=0>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=17>
> + */
> +#define RTE_PTYPE_L4_UDP 0x00000200
> +/**
> + * Fragmented IP (Internet Protocol) packet type.
> + * It is used for outer packet for tunneling cases.
> + *
> + * It refers to those packets of any IP types, which can be recognized as
> + * fragmented. A fragmented packet cannot be recognized as any other L4 types
> + * (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP,
> + * RTE_PTYPE_L4_NONFRAG).
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'MF'=1>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=44>
> + */
> +#define RTE_PTYPE_L4_FRAG 0x00000300
> +/**
> + * SCTP (Stream Control Transmission Protocol) packet type.
> + * It is used for outer packet for tunneling cases.
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=132, 'MF'=0>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=132>
> + */
> +#define RTE_PTYPE_L4_SCTP 0x00000400
> +/**
> + * ICMP (Internet Control Message Protocol) packet type.
> + * It is used for outer packet for tunneling cases.
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=1, 'MF'=0>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=1>
> + */
> +#define RTE_PTYPE_L4_ICMP 0x00000500
> +/**
> + * Non-fragmented IP (Internet Protocol) packet type.
> + * It is used for outer packet for tunneling cases.
> + *
> + * It refers to those packets of any IP types, which cannot be recognized as
> + * any of the above L4 types (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP,
> + * RTE_PTYPE_L4_FRAG, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP).
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'!=[6|17|44|132|1]>
> + */
> +#define RTE_PTYPE_L4_NONFRAG 0x00000600
> +/**
> + * Mask of layer 4 packet types.
> + * It is used for outer packet for tunneling cases.
> + */
> +#define RTE_PTYPE_L4_MASK 0x00000f00
> +/**
> + * IP (Internet Protocol) in IP (Internet Protocol) tunneling packet type.
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=[4|41]>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=[4|41]>
> + */
> +#define RTE_PTYPE_TUNNEL_IP 0x00001000
> +/**
> + * GRE (Generic Routing Encapsulation) tunneling packet type.
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=47>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=47>
> + */
> +#define RTE_PTYPE_TUNNEL_GRE 0x00002000
> +/**
> + * VXLAN (Virtual eXtensible Local Area Network) tunneling packet type.
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=17
> + * | 'destination port'=4789>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=17
> + * | 'destination port'=4789>
> + */
> +#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
> +/**
> + * NVGRE (Network Virtualization using Generic Routing Encapsulation) tunneling
> + * packet type.
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=47
> + * | 'protocol type'=0x6558>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=47
> + * | 'protocol type'=0x6558'>
> + */
> +#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
> +/**
> + * GENEVE (Generic Network Virtualization Encapsulation) tunneling packet type.
> + *
> + * Packet format:
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=17
> + * | 'destination port'=6081>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=17
> + * | 'destination port'=6081>
> + */
> +#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
> +/**
> + * Tunneling packet type of Teredo, VXLAN (Virtual eXtensible Local Area
> + * Network) or GRE (Generic Routing Encapsulation) could be recognized as this
> + * packet type, if they cannot be recognized independently due to limited
> + * hardware capability.
> + */
> +#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
> +/**
> + * Mask of tunneling packet types.
> + */
> +#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
> +/**
> + * MAC (Media Access Control) packet type.
> + * It is used for inner packet type only.
> + *
> + * Packet format (inner only):
> + * <'ether type'=[0x800|0x86DD]>
> + */
> +#define RTE_PTYPE_INNER_L2_MAC 0x00010000
> +/**
> + * MAC (Media Access Control) packet type with VLAN (Virtual Local Area
> + * Network) tag.
> + *
> + * Packet format (inner only):
> + * <'ether type'=[0x800|0x86DD], vlan=[1-4095]>
> + */
> +#define RTE_PTYPE_INNER_L2_MAC_VLAN 0x00020000
> +/**
> + * Mask of inner layer 2 packet types.
> + */
> +#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
> +/**
> + * IP (Internet Protocol) version 4 packet type.
> + * It is used for inner packet only, and does not contain any header option.
> + *
> + * Packet format (inner only):
> + * <'ether type'=0x0800
> + * | 'version'=4, 'ihl'=5>
> + */
> +#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
> +/**
> + * IP (Internet Protocol) version 4 packet type.
> + * It is used for inner packet only, and contains header options.
> + *
> + * Packet format (inner only):
> + * <'ether type'=0x0800
> + * | 'version'=4, 'ihl'=[6-15], 'options'>
> + */
> +#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
> +/**
> + * IP (Internet Protocol) version 6 packet type.
> + * It is used for inner packet only, and does not contain any extension header.
> + *
> + * Packet format (inner only):
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=0x3B>
> + */
> +#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
> +/**
> + * IP (Internet Protocol) version 4 packet type.
> + * It is used for inner packet only, and may or may not contain header options.
> + *
> + * Packet format (inner only):
> + * <'ether type'=0x0800
> + * | 'version'=4, 'ihl'=[5-15], <'options'>>
> + */
> +#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
> +/**
> + * IP (Internet Protocol) version 6 packet type.
> + * It is used for inner packet only, and contains extension headers.
> + *
> + * Packet format (inner only):
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
> + * 'extension headers'>
> + */
> +#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
> +/**
> + * IP (Internet Protocol) version 6 packet type.
> + * It is used for inner packet only, and may or may not contain extension
> + * headers.
> + *
> + * Packet format (inner only):
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
> + * <'extension headers'>>
> + */
> +#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
> +/**
> + * Mask of inner layer 3 packet types.
> + */
> +#define RTE_PTYPE_INNER_INNER_L3_MASK 0x00f00000
> +/**
> + * TCP (Transmission Control Protocol) packet type.
> + * It is used for inner packet only.
> + *
> + * Packet format (inner only):
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=6, 'MF'=0>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=6>
> + */
> +#define RTE_PTYPE_INNER_L4_TCP 0x01000000
> +/**
> + * UDP (User Datagram Protocol) packet type.
> + * It is used for inner packet only.
> + *
> + * Packet format (inner only):
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=17, 'MF'=0>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=17>
> + */
> +#define RTE_PTYPE_INNER_L4_UDP 0x02000000
> +/**
> + * Fragmented IP (Internet Protocol) packet type.
> + * It is used for inner packet only, and may or may not have layer 4 packet.
> + *
> + * Packet format (inner only):
> + * <'ether type'=0x0800
> + * | 'version'=4, 'MF'=1>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=44>
> + */
> +#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
> +/**
> + * SCTP (Stream Control Transmission Protocol) packet type.
> + * It is used for inner packet only.
> + *
> + * Packet format (inner only):
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=132, 'MF'=0>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=132>
> + */
> +#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
> +/**
> + * ICMP (Internet Control Message Protocol) packet type.
> + * It is used for inner packet only.
> + *
> + * Packet format (inner only):
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'=1, 'MF'=0>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'=1>
> + */
> +#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
> +/**
> + * Non-fragmented IP (Internet Protocol) packet type.
> + * It is used for inner packet only, and may or may not have other unknown layer
> + * 4 packet types.
> + *
> + * Packet format (inner only):
> + * <'ether type'=0x0800
> + * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
> + * or,
> + * <'ether type'=0x86DD
> + * | 'version'=6, 'next header'!=[6|17|44|132|1]>
> + */
> +#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
> +/**
> + * Mask of inner layer 4 packet types.
> + */
> +#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
> +
> +/**
> + * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
> + * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
> + * determine if it is an IPV4 packet.
> + */
> +#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
> +
> +/**
> + * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
> + * one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
> + * determine if it is an IPV6 packet.
> + */
> +#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
> +
> +/* Check if it is a tunneling packet */
> +#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
> +#endif /* RTE_NEXT_ABI */
> +
> /**
> * Get the name of a RX offload flag
> *
>
^ permalink raw reply [flat|nested] 257+ messages in thread
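To make the RTE_ETH_IS_IPV4_HDR/RTE_ETH_IS_IPV6_HDR comments above a little
more concrete: every IPv4 L3 value (0x10, 0x30, 0x90) has bit 4 set and
every IPv6 L3 value (0x40, 0xc0, 0xe0) has bit 6 set, so one AND replaces
comparing each RTE_PTYPE_L3_* value in turn. A small usage sketch, assuming
RTE_NEXT_ABI; pkt_class() is a hypothetical helper.

#include <rte_mbuf.h>

/* Coarse classification of a received mbuf by its packet_type. */
static const char *
pkt_class(const struct rte_mbuf *m)
{
	if (RTE_ETH_IS_TUNNEL_PKT(m->packet_type))
		return "tunnel";   /* any RTE_PTYPE_TUNNEL_* value set */
	if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
		return "ipv4";     /* L3 field is 0x10, 0x30 or 0x90 */
	if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
		return "ipv6";     /* L3 field is 0x40, 0xc0 or 0xe0 */
	return "other";
}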
* Re: [dpdk-dev] [PATCH v8 03/18] mbuf: add definitions of unified packet types
2015-06-30 8:43 ` Olivier MATZ
@ 2015-07-02 1:30 ` Zhang, Helin
2015-07-02 9:31 ` Olivier MATZ
0 siblings, 1 reply; 257+ messages in thread
From: Zhang, Helin @ 2015-07-02 1:30 UTC (permalink / raw)
To: Olivier MATZ, dev
Hi Oliver
Thanks for your helps!
> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Tuesday, June 30, 2015 4:44 PM
> To: Zhang, Helin; dev@dpdk.org
> Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev, Konstantin; Richardson,
> Bruce; yongwang@vmware.com; Wu, Jingjing
> Subject: Re: [PATCH v8 03/18] mbuf: add definitions of unified packet types
>
> Hi Helin,
>
> This is greatly documented, thanks!
> Please find a small comment below.
>
> On 06/23/2015 03:50 AM, Helin Zhang wrote:
> > As there are only 6 bit flags in ol_flags for indicating packet types,
> > which is not enough to describe all the possible packet types hardware
> > can recognize. For example, i40e hardware can recognize more than 150
> > packet types. Unified packet type is composed of L2 type, L3 type, L4
> > type, tunnel type, inner L2 type, inner L3 type and inner L4 type
> > fields, and can be stored in 'struct rte_mbuf' of 32 bits field
> > 'packet_type'.
> > To avoid breaking ABI compatibility, all the changes would be enabled
> > by RTE_NEXT_ABI, which is disabled by default.
> >
> > [...]
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index 0315561..0ee0c55 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -201,6 +201,493 @@ extern "C" {
> > /* Use final bit of flags to indicate a control mbuf */
> > #define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control
> data */
> >
> > +#ifdef RTE_NEXT_ABI
> > +/*
> > + * 32 bits are divided into several fields to mark packet types. Note
> > +that
> > + * each field is indexical.
> > + * - Bit 3:0 is for L2 types.
> > + * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
> > + * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
> > + * - Bit 15:12 is for tunnel types.
> > + * - Bit 19:16 is for inner L2 types.
> > + * - Bit 23:20 is for inner L3 types.
> > + * - Bit 27:24 is for inner L4 types.
> > + * - Bit 31:28 is reserved.
> > + *
> > + * To be compatible with Vector PMD, RTE_PTYPE_L3_IPV4,
> > +RTE_PTYPE_L3_IPV4_EXT,
> > + * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP,
> > +RTE_PTYPE_L4_UDP
> > + * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
> > + *
> > + * Note that L3 types values are selected for checking IPV4/IPV6
> > +header from
> > + * performance point of view. Reading annotations of
> > +RTE_ETH_IS_IPV4_HDR and
> > + * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
> > + *
> > + * Note that the packet types of the same packet recognized by
> > +different
> > + * hardware may be different, as different hardware may have
> > +different
> > + * capability of packet type recognition.
> > + *
> > + * examples:
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'=0x29
> > + * | 'version'=6, 'next header'=0x3A
> > + * | 'ICMPv6 header'>
> > + * will be recognized on i40e hardware as packet type combination of,
> > + * RTE_PTYPE_L2_MAC |
> > + * RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> > + * RTE_PTYPE_TUNNEL_IP |
> > + * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> > + * RTE_PTYPE_INNER_L4_ICMP.
> > + *
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=0x2F
> > + * | 'GRE header'
> > + * | 'version'=6, 'next header'=0x11
> > + * | 'UDP header'>
> > + * will be recognized on i40e hardware as packet type combination of,
> > + * RTE_PTYPE_L2_MAC |
> > + * RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> > + * RTE_PTYPE_TUNNEL_GRENAT |
> > + * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> > + * RTE_PTYPE_INNER_L4_UDP.
> > + */
> > +#define RTE_PTYPE_UNKNOWN 0x00000000
> > +/**
> > + * MAC (Media Access Control) packet type.
> > + * It is used for outer packet for tunneling cases.
> > + *
> > + * Packet format:
> > + * <'ether type'=[0x0800|0x86DD|others]> */
> > +#define RTE_PTYPE_L2_MAC 0x00000001
>
> I'm wondering if RTE_PTYPE_L2_ETHER is not a better name?
Ethernet includes both Data Link Layer and Physical Layer, while MAC is for Data Link
Layer only. I would prefer to keep 'MAC' in the names, rather than 'ether'.
Any opinions from others?
Regards,
Helin
>
>
> > +/**
> > + * MAC (Media Access Control) packet type for time sync.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x88F7>
> > + */
> > +#define RTE_PTYPE_L2_MAC_TIMESYNC 0x00000002
> > +/**
> > + * ARP (Address Resolution Protocol) packet type.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x0806>
> > + */
> > +#define RTE_PTYPE_L2_ARP 0x00000003
> > +/**
> > + * LLDP (Link Layer Discovery Protocol) packet type.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x88CC>
> > + */
> > +#define RTE_PTYPE_L2_LLDP 0x00000004
>
> Maybe ETHER should appear in these names too, what do you think?
Same as above.
>
>
>
>
> > +/**
> > + * Mask of layer 2 packet types.
> > + * It is used for outer packet for tunneling cases.
> > + */
> > +#define RTE_PTYPE_L2_MASK 0x0000000f
> > +/**
> > + * IP (Internet Protocol) version 4 packet type.
> > + * It is used for outer packet for tunneling cases, and does not
> > +contain any
> > + * header option.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'ihl'=5>
> > + */
> > +#define RTE_PTYPE_L3_IPV4 0x00000010
> > +/**
> > + * IP (Internet Protocol) version 4 packet type.
> > + * It is used for outer packet for tunneling cases, and contains
> > +header
> > + * options.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'ihl'=[6-15], 'options'> */
> > +#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
> > +/**
> > + * IP (Internet Protocol) version 6 packet type.
> > + * It is used for outer packet for tunneling cases, and does not
> > +contain any
> > + * extension header.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=0x3B> */
> > +#define RTE_PTYPE_L3_IPV6 0x00000040
> > +/**
> > + * IP (Internet Protocol) version 4 packet type.
> > + * It is used for outer packet for tunneling cases, and may or maynot
> > +contain
> > + * header options.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'ihl'=[5-15], <'options'>> */
> > +#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
> > +/**
> > + * IP (Internet Protocol) version 6 packet type.
> > + * It is used for outer packet for tunneling cases, and contains
> > +extension
> > + * headers.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
> > + * 'extension headers'>
> > + */
> > +#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
> > +/**
> > + * IP (Internet Protocol) version 6 packet type.
> > + * It is used for outer packet for tunneling cases, and may or maynot
> > +contain
> > + * extension headers.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
> > + * <'extension headers'>>
> > + */
> > +#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
> > +/**
> > + * Mask of layer 3 packet types.
> > + * It is used for outer packet for tunneling cases.
> > + */
> > +#define RTE_PTYPE_L3_MASK 0x000000f0
> > +/**
> > + * TCP (Transmission Control Protocol) packet type.
> > + * It is used for outer packet for tunneling cases.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'=6, 'MF'=0>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=6>
> > + */
> > +#define RTE_PTYPE_L4_TCP 0x00000100
> > +/**
> > + * UDP (User Datagram Protocol) packet type.
> > + * It is used for outer packet for tunneling cases.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'=17, 'MF'=0>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=17>
> > + */
> > +#define RTE_PTYPE_L4_UDP 0x00000200
> > +/**
> > + * Fragmented IP (Internet Protocol) packet type.
> > + * It is used for outer packet for tunneling cases.
> > + *
> > + * It refers to those packets of any IP types, which can be
> > +recognized as
> > + * fragmented. A fragmented packet cannot be recognized as any other
> > +L4 types
> > + * (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP, RTE_PTYPE_L4_SCTP,
> > +RTE_PTYPE_L4_ICMP,
> > + * RTE_PTYPE_L4_NONFRAG).
> > + *
> > + * Packet format:
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'MF'=1>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=44>
> > + */
> > +#define RTE_PTYPE_L4_FRAG 0x00000300
> > +/**
> > + * SCTP (Stream Control Transmission Protocol) packet type.
> > + * It is used for outer packet for tunneling cases.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'=132, 'MF'=0>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=132>
> > + */
> > +#define RTE_PTYPE_L4_SCTP 0x00000400
> > +/**
> > + * ICMP (Internet Control Message Protocol) packet type.
> > + * It is used for outer packet for tunneling cases.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'=1, 'MF'=0>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=1>
> > + */
> > +#define RTE_PTYPE_L4_ICMP 0x00000500
> > +/**
> > + * Non-fragmented IP (Internet Protocol) packet type.
> > + * It is used for outer packet for tunneling cases.
> > + *
> > + * It refers to those packets of any IP types, while cannot be
> > +recognized as
> > + * any of above L4 types (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP,
> > + * RTE_PTYPE_L4_FRAG, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP).
> > + *
> > + * Packet format:
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'!=[6|17|44|132|1]> */
> > +#define RTE_PTYPE_L4_NONFRAG 0x00000600
> > +/**
> > + * Mask of layer 4 packet types.
> > + * It is used for outer packet for tunneling cases.
> > + */
> > +#define RTE_PTYPE_L4_MASK 0x00000f00
> > +/**
> > + * IP (Internet Protocol) in IP (Internet Protocol) tunneling packet type.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'=[4|41]>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=[4|41]> */
> > +#define RTE_PTYPE_TUNNEL_IP 0x00001000
> > +/**
> > + * GRE (Generic Routing Encapsulation) tunneling packet type.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'=47>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=47>
> > + */
> > +#define RTE_PTYPE_TUNNEL_GRE 0x00002000
> > +/**
> > + * VXLAN (Virtual eXtensible Local Area Network) tunneling packet type.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'=17
> > + * | 'destination port'=4798>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=17
> > + * | 'destination port'=4798>
> > + */
> > +#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
> > +/**
> > + * NVGRE (Network Virtualization using Generic Routing Encapsulation)
> > +tunneling
> > + * packet type.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'=47
> > + * | 'protocol type'=0x6558>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=47
> > + * | 'protocol type'=0x6558'>
> > + */
> > +#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
> > +/**
> > + * GENEVE (Generic Network Virtualization Encapsulation) tunneling packet
> type.
> > + *
> > + * Packet format:
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'=17
> > + * | 'destination port'=6081>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=17
> > + * | 'destination port'=6081>
> > + */
> > +#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
> > +/**
> > + * Tunneling packet type of Teredo, VXLAN (Virtual eXtensible Local
> > +Area
> > + * Network) or GRE (Generic Routing Encapsulation) could be
> > +recognized as this
> > + * packet type, if they can not be recognized independently as of
> > +hardware
> > + * capability.
> > + */
> > +#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
> > +/**
> > + * Mask of tunneling packet types.
> > + */
> > +#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
> > +/**
> > + * MAC (Media Access Control) packet type.
> > + * It is used for inner packet type only.
> > + *
> > + * Packet format (inner only):
> > + * <'ether type'=[0x800|0x86DD]>
> > + */
> > +#define RTE_PTYPE_INNER_L2_MAC 0x00010000
> > +/**
> > + * MAC (Media Access Control) packet type with VLAN (Virtual Local
> > +Area
> > + * Network) tag.
> > + *
> > + * Packet format (inner only):
> > + * <'ether type'=[0x800|0x86DD], vlan=[1-4095]> */
> > +#define RTE_PTYPE_INNER_L2_MAC_VLAN 0x00020000
> > +/**
> > + * Mask of inner layer 2 packet types.
> > + */
> > +#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
> > +/**
> > + * IP (Internet Protocol) version 4 packet type.
> > + * It is used for inner packet only, and does not contain any header option.
> > + *
> > + * Packet format (inner only):
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'ihl'=5>
> > + */
> > +#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
> > +/**
> > + * IP (Internet Protocol) version 4 packet type.
> > + * It is used for inner packet only, and contains header options.
> > + *
> > + * Packet format (inner only):
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'ihl'=[6-15], 'options'> */
> > +#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
> > +/**
> > + * IP (Internet Protocol) version 6 packet type.
> > + * It is used for inner packet only, and does not contain any extension header.
> > + *
> > + * Packet format (inner only):
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=0x3B> */
> > +#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
> > +/**
> > + * IP (Internet Protocol) version 4 packet type.
> > + * It is used for inner packet only, and may or maynot contain header options.
> > + *
> > + * Packet format (inner only):
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'ihl'=[5-15], <'options'>> */ #define
> > +RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
> > +/**
> > + * IP (Internet Protocol) version 6 packet type.
> > + * It is used for inner packet only, and contains extension headers.
> > + *
> > + * Packet format (inner only):
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
> > + * 'extension headers'>
> > + */
> > +#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
> > +/**
> > + * IP (Internet Protocol) version 6 packet type.
> > + * It is used for inner packet only, and may or maynot contain
> > +extension
> > + * headers.
> > + *
> > + * Packet format (inner only):
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
> > + * <'extension headers'>>
> > + */
> > +#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
> > +/**
> > + * Mask of inner layer 3 packet types.
> > + */
> > +#define RTE_PTYPE_INNER_INNER_L3_MASK 0x00f00000
> > +/**
> > + * TCP (Transmission Control Protocol) packet type.
> > + * It is used for inner packet only.
> > + *
> > + * Packet format (inner only):
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'=6, 'MF'=0>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=6>
> > + */
> > +#define RTE_PTYPE_INNER_L4_TCP 0x01000000
> > +/**
> > + * UDP (User Datagram Protocol) packet type.
> > + * It is used for inner packet only.
> > + *
> > + * Packet format (inner only):
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'=17, 'MF'=0>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=17>
> > + */
> > +#define RTE_PTYPE_INNER_L4_UDP 0x02000000
> > +/**
> > + * Fragmented IP (Internet Protocol) packet type.
> > + * It is used for inner packet only, and may or maynot have layer 4 packet.
> > + *
> > + * Packet format (inner only):
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'MF'=1>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=44>
> > + */
> > +#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
> > +/**
> > + * SCTP (Stream Control Transmission Protocol) packet type.
> > + * It is used for inner packet only.
> > + *
> > + * Packet format (inner only):
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'=132, 'MF'=0>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=132>
> > + */
> > +#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
> > +/**
> > + * ICMP (Internet Control Message Protocol) packet type.
> > + * It is used for inner packet only.
> > + *
> > + * Packet format (inner only):
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'=1, 'MF'=0>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'=1>
> > + */
> > +#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
> > +/**
> > + * Non-fragmented IP (Internet Protocol) packet type.
> > + * It is used for inner packet only, and may or maynot have other
> > +unknown layer
> > + * 4 packet types.
> > + *
> > + * Packet format (inner only):
> > + * <'ether type'=0x0800
> > + * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
> > + * or,
> > + * <'ether type'=0x86DD
> > + * | 'version'=6, 'next header'!=[6|17|44|132|1]> */
> > +#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
> > +/**
> > + * Mask of inner layer 4 packet types.
> > + */
> > +#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
> > +
> > +/**
> > + * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one
> > + * by one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
> > + * determine if it is an IPv4 packet.
> > + */
> > +#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
> > +
> > +/**
> > + * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one
> > + * by one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
> > + * determine if it is an IPv6 packet.
> > + */
> > +#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
> > +
> > +/* Check if it is a tunneling packet */
> > +#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
> > +#endif /* RTE_NEXT_ABI */
> > +
> > /**
> > * Get the name of a RX offload flag
> > *
> >
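As a minimal sketch (not part of the patch, and assuming a build with RTE_NEXT_ABI
enabled so that the macros above exist), an application could use the two single-bit
checks like this; it works because every IPv4 L3 code (0x10, 0x30, 0x90) has bit 4
set and bit 6 clear, while every IPv6 L3 code (0x40, 0xc0, 0xe0) has bit 6 set and
bit 4 clear:

#include <rte_mbuf.h>

/* Sketch only: name the outer L3 header of a received mbuf using the
 * single-bit checks defined above (requires RTE_NEXT_ABI). */
static inline const char *
outer_l3_name(const struct rte_mbuf *m)
{
	if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
		return "IPv4";	/* L3 code 0x10, 0x30 or 0x90: bit 4 set */
	if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
		return "IPv6";	/* L3 code 0x40, 0xc0 or 0xe0: bit 6 set */
	return "non-IP or unknown";
}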
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v8 00/18] unified packet type
2015-06-23 16:13 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Ananyev, Konstantin
@ 2015-07-02 8:45 ` Liu, Yong
0 siblings, 0 replies; 257+ messages in thread
From: Liu, Yong @ 2015-07-02 8:45 UTC (permalink / raw)
To: Ananyev, Konstantin, Zhang, Helin, dev
Tested-by: Yong Liu <yong.liu@intel.com>
- Tested Commit: 7e1fa1de8a536c68f6af76cf8d222a9e948c93ba
- OS: Fedora20 3.15.5
- GCC: gcc version 4.8.3 20140911
- CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
- NIC: Intel Corporation XL710 10-Gigabit SFI/SFP+ [8086:1572]
- NIC: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ [8086:10fb]
- NIC: Intel Corporation I350 Gigabit Network Connection [8086:1521]
- Default x86_64-native-linuxapp-gcc configuration
- Prerequisites: Enable CONFIG_RTE_NEXT_ABI in dpdk configuration
Disable CONFIG_RTE_IXGBE_INC_VECTOR in dpdk configuration
- Total 10 cases, 10 passed, 0 failed
- Case: L2 packet type detect
Description: check L2 only packet can be normally detected by Fortville
Command / instruction:
Start testpmd and enable rxonly verbose mode
./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i --txqflags=0x0
set fwd rxonly
set verbose 1
start
Send time sync packet and verify Timesync packet detected
Send ARP packet and verify ARP packet detected
Send LLDP packet and verify LLDP packet detected
- Case: IPv4&L4 packet type detect
Description: check L3 and L4 packet can be normally detected
Fortville does not detect whether a packet contains IPv4 header options, so the
L3 type will be shown as IPV4_EXT_UNKNOWN
Command / instruction:
Start testpmd and enable rxonly verbose mode
Send IP only packet and verify L3 packet detected
Send IP+UDP packet and verify L3&L4 packet detected
Send IP+TCP packet and verify L3&L4 packet detected
Send IP+SCTP packet and verify L3&L4 packet detected
Send IP+ICMP packet and verify L3&L4 packet detected
Send IP fragment+TCP packet and verify L3&L4 packet detected
- Case: IPv6&L4 packet type detect
Description: check IPv6 and L4 packet can be normally detected
Fortville does not detect whether a packet contains IPv6 extension headers, so
the L3 type will be shown as IPV6_EXT_UNKNOWN
Command / instruction:
Start testpmd and enable rxonly verbose mode
Send IPv6 only packet and verify L3 packet detected
Send IPv6+UDP packet and verify L3&L4 packet detected
Send IPv6+TCP packet and verify L3&L4 packet detected
Send IPv6 fragment+TCP packet and verify L3&L4 packet detected
- Case: IP in IPv4 tunnel packet type detect
Description: check tunnel packet can be normally detected by Fortville
Command / instruction:
Send IPv4+IPv4 fragment packet and verify tunnel packet detected
Send IPv4+IPv4 packet and verify tunnel packet detected
Send IPv4+IPv4+UDP packet and verify tunnel packet detected
Send IPv4+IPv4+TCP packet and verify tunnel packet detected
Send IPv4+IPv4+SCTP packet and verify tunnel packet detected
Send IPv4+IPv4+ICMP packet and verify tunnel packet detected
Send IPv4+IPv6 fragment packet and verify tunnel packet detected
Send IPv4+IPv6 packet and verify tunnel packet detected
Send IPv4+IPv6+UDP packet and verify tunnel packet detected
Send IPv4+IPv6+TCP packet and verify tunnel packet detected
Send IPv4+IPv6+SCTP packet and verify tunnel packet detected
Send IPv4+IPv6+ICMP packet and verify tunnel packet detected
- Case: IPv6 in IPv4 tunnel packet type detect by Niantic
Description: check tunnel packet can be normally detected by Niantic
Niantic can only detect a few types of IP-in-IP tunnel packets; this case is
designed to test them.
Command / instruction:
Send IPv4+IPv6 packet and verify tunnel packet detected
Send IPv4+IPv6_EXT packet and verify tunnel packet detected
Send IPv4+IPv6+UDP packet and verify tunnel packet detected
Send IPv4+IPv6+TCP packet and verify tunnel packet detected
Send IPv4+IPv6_EXT+UDP packet and verify tunnel packet detected
Send IPv4+IPv6_EXT+TCP packet and verify tunnel packet detected
- Case: IP in IPv6 tunnel packet type detect
Description: check tunnel packet can be normally detected by Fortville
Command / instruction:
Send IPv6+IPv4 fragment packet and verify tunnel packet detected
Send IPv6+IPv4 packet and verify tunnel packet detected
Send IPv6+IPv4+UDP packet and verify tunnel packet detected
Send IPv6+IPv4+TCP packet and verify tunnel packet detected
Send IPv6+IPv4+SCTP packet and verify tunnel packet detected
Send IPv6+IPv4+ICMP packet and verify tunnel packet detected
Send IPv6+IPv6 fragment packet and verify tunnel packet detected
Send IPv6+IPv6 packet and verify tunnel packet detected
Send IPv6+IPv6+UDP packet and verify tunnel packet detected
Send IPv6+IPv6+TCP packet and verify tunnel packet detected
Send IPv6+IPv6+SCTP packet and verify tunnel packet detected
Send IPv6+IPv6+ICMP packet and verify tunnel packet detected
- Case: NVGRE tunnel packet type detect
Description: check tunnel packet can be normally detected by Fortville
Fortville does not distinguish GRE/Teredo/Vxlan packets; all those types
will be displayed as GRENAT
Command / instruction:
Send IPv4+NVGRE fragment packet and verify tunnel packet detected
Send IPV4+NVGRE+MAC packet and verify tunnel packet detected
Send IPv4+NVGRE+MAC_VLAN packet and verify tunnel packet detected
Send IPv4+NVGRE+MAC_VLAN+IPv4 fragment packet and verify tunnel packet
detected
Send IPv4+NVGRE+MAC_VLAN+IPv4 packet and verify tunnel packet detected
Send IPv4+NVGRE+MAC_VLAN+IPv4+UDP packet and verify tunnel packet detected
Send IPv4+NVGRE+MAC_VLAN+IPv4+TCP packet and verify tunnel packet detected
Send IPv4+NVGRE+MAC_VLAN+IPv4+SCTP packet and verify tunnel packet
detected
Send IPv4+NVGRE+MAC_VLAN+IPv4+ICMP packet and verify tunnel packet
detected
Send IPv4+NVGRE+MAC_VLAN+IPv6 fragment packet and verify tunnel packet
detected
Send IPv4+NVGRE+MAC_VLAN+IPv6 packet and verify tunnel packet detected
Send IPv4+NVGRE+MAC_VLAN+IPv6+UDP packet and verify tunnel packet detected
Send IPv4+NVGRE+MAC_VLAN+IPv6+TCP packet and verify tunnel packet detected
Send IPv4+NVGRE+MAC_VLAN+IPv6+SCTP packet and verify tunnel packet
detected
Send IPv4+NVGRE+MAC_VLAN+IPv6+ICMP packet and verify tunnel packet
detected
- Case: NVGRE in IPv6 tunnel packet type detect
Description: check tunnel packet can be normally detected by Fortville
Fortville does not distinguish GRE/Teredo/Vxlan packets; all those types
will be displayed as GRENAT
Command / instruction:
Send IPV6+NVGRE+MAC packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC+IPv4 fragment packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC+IPv4 packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC+IPv4+UDP packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC+IPv4+TCP packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC+IPv4+SCTP packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC+IPv4+ICMP packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC+IPv6 fragment packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC+IPv6 packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC+IPv6+UDP packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC+IPv6+TCP packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC+IPv6+SCTP packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC+IPv6+ICMP packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC_VLAN+IPv4 fragment packet and verify tunnel packet
detected
Send IPV6+NVGRE+MAC_VLAN+IPv4 packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC_VLAN+IPv4+UDP packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC_VLAN+IPv4+TCP packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC_VLAN+IPv4+SCTP packet and verify tunnel packet
detected
Send IPV6+NVGRE+MAC_VLAN+IPv4+ICMP packet and verify tunnel packet
detected
Send IPV6+NVGRE+MAC_VLAN+IPv6 fragment packet and verify tunnel packet
detected
Send IPV6+NVGRE+MAC_VLAN+IPv6 packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC_VLAN+IPv6+UDP packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC_VLAN+IPv6+TCP packet and verify tunnel packet detected
Send IPV6+NVGRE+MAC_VLAN+IPv6+SCTP packet and verify tunnel packet
detected
Send IPV6+NVGRE+MAC_VLAN+IPv6+ICMP packet and verify tunnel packet
detected
- Case: GRE tunnel packet type detect
Description: check tunnel packet can be normally detected by Fortville
Fortville does not distinguish GRE/Teredo/Vxlan packets; all those types
will be displayed as GRENAT
Command / instruction:
Send IPv4+GRE+IPv4 fragment packet and verify tunnel packet detected
Send IPv4+GRE+IPv4 packet and verify tunnel packet detected
Send IPv4+GRE+IPv4+UDP packet and verify tunnel packet detected
Send IPv4+GRE+IPv4+TCP packet and verify tunnel packet detected
Send IPv4+GRE+IPv4+SCTP packet and verify tunnel packet detected
Send IPv4+GRE+IPv4+ICMP packet and verify tunnel packet detected
Send IPv4+GRE packet and verify tunnel packet detected
- Case: Vxlan tunnel packet type detect
Description: check tunnel packet can be normally detected by Fortville
Fortville does not distinguish GRE/Teredo/Vxlan packets; all those types
will be displayed as GRENAT
Command / instruction:
Add vxlan tunnel port filter on receive port
rx_vxlan_port add 4789 0
Send IPv4+Vxlan+MAC+IPv4 fragment packet and verify tunnel packet detected
Send IPv4+Vxlan+MAC+IPv4 packet and verify tunnel packet detected
Send IPv4+Vxlan+MAC+IPv4+UDP packet and verify tunnel packet detected
Send IPv4+Vxlan+MAC+IPv4+TCP packet and verify tunnel packet detected
Send IPv4+Vxlan+MAC+IPv4+SCTP packet and verify tunnel packet detected
Send IPv4+Vxlan+MAC+IPv4+ICMP packet and verify tunnel packet detected
Send IPv4+Vxlan+MAC+IPv6 fragment packet and verify tunnel packet detected
Send IPv4+Vxlan+MAC+IPv6 packet and verify tunnel packet detected
Send IPv4+Vxlan+MAC+IPv6+UDP packet and verify tunnel packet detected
Send IPv4+Vxlan+MAC+IPv6+TCP packet and verify tunnel packet detected
Send IPv4+Vxlan+MAC+IPv6+SCTP packet and verify tunnel packet detected
Send IPv4+Vxlan+MAC+IPv6+ICMP packet and verify tunnel packet detected
Send IPv4+Vxlan+MAC packet and verify tunnel packet detected
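As a rough sketch of what the verbose rxonly output above is checking, the 32-bit
packet_type can be split field by field with the masks defined in the patch; the
helper below is illustrative only (it assumes RTE_NEXT_ABI and is not the actual
testpmd code):

#include <stdint.h>
#include <stdio.h>
#include <rte_mbuf.h>

/* Illustrative only: split a unified packet_type into its 4-bit sub-fields
 * using the masks from the patch (note the inner L3 mask is named
 * RTE_PTYPE_INNER_INNER_L3_MASK there). */
static void
dump_ptype(uint32_t ptype)
{
	printf("l2=%#x l3=%#x l4=%#x tunnel=%#x inner_l2=%#x inner_l3=%#x inner_l4=%#x\n",
	       (unsigned int)(ptype & RTE_PTYPE_L2_MASK),
	       (unsigned int)(ptype & RTE_PTYPE_L3_MASK),
	       (unsigned int)(ptype & RTE_PTYPE_L4_MASK),
	       (unsigned int)(ptype & RTE_PTYPE_TUNNEL_MASK),
	       (unsigned int)(ptype & RTE_PTYPE_INNER_L2_MASK),
	       (unsigned int)(ptype & RTE_PTYPE_INNER_INNER_L3_MASK),
	       (unsigned int)(ptype & RTE_PTYPE_INNER_L4_MASK));
}

For the NVGRE/GRE/Vxlan cases above, for example, the tunnel field would carry the
GRENAT code while the inner fields describe the encapsulated headers.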
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev, Konstantin
> Sent: Wednesday, June 24, 2015 12:14 AM
> To: Zhang, Helin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v8 00/18] unified packet type
>
>
>
> > -----Original Message-----
> > From: Zhang, Helin
> > Sent: Tuesday, June 23, 2015 2:50 AM
> > To: dev@dpdk.org
> > Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev, Konstantin;
> Richardson, Bruce; yongwang@vmware.com;
> > olivier.matz@6wind.com; Wu, Jingjing; Zhang, Helin
> > Subject: [PATCH v8 00/18] unified packet type
> >
> > Currently only 6 bits which are stored in ol_flags are used to indicate
> the
> > packet types. This is not enough, as some NIC hardware can recognize
> quite
> > a lot of packet types, e.g i40e hardware can recognize more than 150
> packet
> > types. Hiding those packet types hides hardware offload capabilities
> which
> > could be quite useful for improving performance and for end users.
> > So an unified packet types are needed to support all possible PMDs. A 16
> > bits packet_type in mbuf structure can be changed to 32 bits and used
> for
> > this purpose. In addition, all packet types stored in ol_flag field
> should
> > be deleted at all, and 6 bits of ol_flags can be save as the benifit.
> >
> > Initially, 32 bits of packet_type can be divided into several sub fields
> to
> > indicate different packet type information of a packet. The initial
> design
> > is to divide those bits into fields for L2 types, L3 types, L4 types,
> tunnel
> > types, inner L2 types, inner L3 types and inner L4 types. All PMDs
> should
> > translate the offloaded packet types into these 7 fields of information,
> for
> > user applications.
> >
> > To avoid breaking ABI compatibility, currently all the code changes for
> > unified packet type are disabled at compile time by default. Users can
> enable
> > it manually by defining the macro of RTE_NEXT_ABI. The code changes will
> be
> > valid by default in a future release, and the old version will be
> deleted
> > accordingly, after the ABI change process is done.
> >
> > Note that this patch set should be integrated after another patch set
> for
> > '[PATCH v3 0/7] support i40e QinQ stripping and insertion', to clearly
> solve
> > the conflict during integration. As both patch sets modified 'struct
> rte_mbuf',
> > and the final layout of the 'struct rte_mbuf' is key to vectorized ixgbe
> PMD.
> >
> > v2 changes:
> > * Enlarged the packet_type field from 16 bits to 32 bits.
> > * Redefined the packet type sub-fields.
> > * Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf
> changes.
> > * Used redefined packet types and enlarged packet_type field for all
> PMDs
> > and corresponding applications.
> > * Removed changes in bond and its relevant application, as there is no
> need
> > at all according to the recent bond changes.
> >
> > v3 changes:
> > * Put the mbuf layout changes into a single patch.
> > * Put vector ixgbe changes right after mbuf changes.
> > * Disabled vector ixgbe PMD by default, as mbuf layout changed, and then
> > re-enabled it after vector ixgbe PMD updated.
> > * Put the definitions of unified packet type into a single patch.
> > * Minor bug fixes and enhancements in l3fwd example.
> >
> > v4 changes:
> > * Added detailed description of each packet types.
> > * Supported unified packet type of fm10k.
> > * Added printing logs of packet types of each received packet for rxonly
> > mode in testpmd.
> > * Removed several useless code lines which block packet type unification
> from
> > app/test/packet_burst_generator.c.
> >
> > v5 changes:
> > * Added more detailed description for each packet types, together with
> examples.
> > * Rolled back the macro definitions of RX packet flags, for ABI
> compitability.
> >
> > v6 changes:
> > * Disabled the code changes for unified packet type by default, to
> > avoid breaking ABI compatibility.
> >
> > v7 changes:
> > * Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
> > * Integrated with patch set for '[PATCH v3 0/7] support i40e QinQ
> stripping
> > and insertion', to clearly solve the conflicts during merging.
> >
> > v8 changes:
> > * Moved the field of 'vlan_tci_outer' in 'struct rte_mbuf' to the end of
> the 1st
> > cache line, to avoid breaking any vectorized PMD storing, as fields of
> > 'packet_type, pkt_len, data_len, vlan_tci, rss' should be in an
> contiguous 128
> > bits.
> >
> > Helin Zhang (18):
> > mbuf: redefine packet_type in rte_mbuf
> > ixgbe: support unified packet type in vectorized PMD
> > mbuf: add definitions of unified packet types
> > e1000: replace bit mask based packet type with unified packet type
> > ixgbe: replace bit mask based packet type with unified packet type
> > i40e: replace bit mask based packet type with unified packet type
> > enic: replace bit mask based packet type with unified packet type
> > vmxnet3: replace bit mask based packet type with unified packet type
> > fm10k: replace bit mask based packet type with unified packet type
> > app/test-pipeline: replace bit mask based packet type with unified
> > packet type
> > app/testpmd: replace bit mask based packet type with unified packet
> > type
> > app/test: Remove useless code
> > examples/ip_fragmentation: replace bit mask based packet type with
> > unified packet type
> > examples/ip_reassembly: replace bit mask based packet type with
> > unified packet type
> > examples/l3fwd-acl: replace bit mask based packet type with unified
> > packet type
> > examples/l3fwd-power: replace bit mask based packet type with unified
> > packet type
> > examples/l3fwd: replace bit mask based packet type with unified packet
> > type
> > mbuf: remove old packet type bit masks
> >
> > app/test-pipeline/pipeline_hash.c | 13 +
> > app/test-pmd/csumonly.c | 14 +
> > app/test-pmd/rxonly.c | 183 +++++++
> > app/test/packet_burst_generator.c | 6 +-
> > drivers/net/e1000/igb_rxtx.c | 102 ++++
> > drivers/net/enic/enic_main.c | 26 +
> > drivers/net/fm10k/fm10k_rxtx.c | 27 ++
> > drivers/net/i40e/i40e_rxtx.c | 528
> +++++++++++++++++++++
> > drivers/net/ixgbe/ixgbe_rxtx.c | 163 +++++++
> > drivers/net/ixgbe/ixgbe_rxtx_vec.c | 75 ++-
> > drivers/net/vmxnet3/vmxnet3_rxtx.c | 8 +
> > examples/ip_fragmentation/main.c | 9 +
> > examples/ip_reassembly/main.c | 9 +
> > examples/l3fwd-acl/main.c | 29 +-
> > examples/l3fwd-power/main.c | 8 +
> > examples/l3fwd/main.c | 123 ++++-
> > .../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 +
> > lib/librte_mbuf/rte_mbuf.c | 4 +
> > lib/librte_mbuf/rte_mbuf.h | 517
> ++++++++++++++++++++
> > 19 files changed, 1837 insertions(+), 13 deletions(-)
> >
> > --
>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>
> > 1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v8 01/18] mbuf: redefine packet_type in rte_mbuf
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
@ 2015-07-02 9:03 ` Thomas Monjalon
2015-07-03 1:11 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Thomas Monjalon @ 2015-07-02 9:03 UTC (permalink / raw)
To: Helin Zhang; +Cc: dev
2015-06-23 09:50, Helin Zhang:
> In order to unify the packet type, the field of 'packet_type' in
> 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
> Accordingly, some fields in 'struct rte_mbuf' are re-organized to
> support this change for Vector PMD. As 'struct rte_kni_mbuf' for
> KNI should be right mapped to 'struct rte_mbuf', it should be
> modified accordingly. In addition, Vector PMD of ixgbe is disabled
> by default, as 'struct rte_mbuf' changed.
[...]
> -CONFIG_RTE_IXGBE_INC_VECTOR=y
> +CONFIG_RTE_IXGBE_INC_VECTOR=n
It is the default configuration. Disabling it does not prevent a
build break during a "git bisect".
Please merge the changes for vector ixgbe into this patch.
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v8 03/18] mbuf: add definitions of unified packet types
2015-07-02 1:30 ` Zhang, Helin
@ 2015-07-02 9:31 ` Olivier MATZ
2015-07-03 1:30 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Olivier MATZ @ 2015-07-02 9:31 UTC (permalink / raw)
To: Zhang, Helin, dev
Hi Helin,
On 07/02/2015 03:30 AM, Zhang, Helin wrote:
> Hi Olivier
>
> Thanks for your help!
>
>> -----Original Message-----
>> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
>> Sent: Tuesday, June 30, 2015 4:44 PM
>> To: Zhang, Helin; dev@dpdk.org
>> Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev, Konstantin; Richardson,
>> Bruce; yongwang@vmware.com; Wu, Jingjing
>> Subject: Re: [PATCH v8 03/18] mbuf: add definitions of unified packet types
>>
>> Hi Helin,
>>
>> This is greatly documented, thanks!
>> Please find a small comment below.
>>
>> On 06/23/2015 03:50 AM, Helin Zhang wrote:
>>> As there are only 6 bit flags in ol_flags for indicating packet types,
>>> which is not enough to describe all the possible packet types hardware
>>> can recognize. For example, i40e hardware can recognize more than 150
>>> packet types. Unified packet type is composed of L2 type, L3 type, L4
>>> type, tunnel type, inner L2 type, inner L3 type and inner L4 type
>>> fields, and can be stored in 'struct rte_mbuf' of 32 bits field
>>> 'packet_type'.
>>> To avoid breaking ABI compatibility, all the changes would be enabled
>>> by RTE_NEXT_ABI, which is disabled by default.
>>>
>>> [...]
>>> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
>>> index 0315561..0ee0c55 100644
>>> --- a/lib/librte_mbuf/rte_mbuf.h
>>> +++ b/lib/librte_mbuf/rte_mbuf.h
>>> @@ -201,6 +201,493 @@ extern "C" {
>>> /* Use final bit of flags to indicate a control mbuf */
>>> #define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control
>> data */
>>>
>>> +#ifdef RTE_NEXT_ABI
>>> +/*
>>> + * 32 bits are divided into several fields to mark packet types. Note
>>> +that
>>> + * each field is indexical.
>>> + * - Bit 3:0 is for L2 types.
>>> + * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
>>> + * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
>>> + * - Bit 15:12 is for tunnel types.
>>> + * - Bit 19:16 is for inner L2 types.
>>> + * - Bit 23:20 is for inner L3 types.
>>> + * - Bit 27:24 is for inner L4 types.
>>> + * - Bit 31:28 is reserved.
>>> + *
>>> + * To be compatible with Vector PMD, RTE_PTYPE_L3_IPV4,
>>> +RTE_PTYPE_L3_IPV4_EXT,
>>> + * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP,
>>> +RTE_PTYPE_L4_UDP
>>> + * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
>>> + *
>>> + * Note that L3 types values are selected for checking IPV4/IPV6
>>> +header from
>>> + * performance point of view. Reading annotations of
>>> +RTE_ETH_IS_IPV4_HDR and
>>> + * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
>>> + *
>>> + * Note that the packet types of the same packet recognized by
>>> +different
>>> + * hardware may be different, as different hardware may have
>>> +different
>>> + * capability of packet type recognition.
>>> + *
>>> + * examples:
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'=0x29
>>> + * | 'version'=6, 'next header'=0x3A
>>> + * | 'ICMPv6 header'>
>>> + * will be recognized on i40e hardware as packet type combination of,
>>> + * RTE_PTYPE_L2_MAC |
>>> + * RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
>>> + * RTE_PTYPE_TUNNEL_IP |
>>> + * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
>>> + * RTE_PTYPE_INNER_L4_ICMP.
>>> + *
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=0x2F
>>> + * | 'GRE header'
>>> + * | 'version'=6, 'next header'=0x11
>>> + * | 'UDP header'>
>>> + * will be recognized on i40e hardware as packet type combination of,
>>> + * RTE_PTYPE_L2_MAC |
>>> + * RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
>>> + * RTE_PTYPE_TUNNEL_GRENAT |
>>> + * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
>>> + * RTE_PTYPE_INNER_L4_UDP.
>>> + */
>>> +#define RTE_PTYPE_UNKNOWN 0x00000000
>>> +/**
>>> + * MAC (Media Access Control) packet type.
>>> + * It is used for outer packet for tunneling cases.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=[0x0800|0x86DD|others]> */
>>> +#define RTE_PTYPE_L2_MAC 0x00000001
>>
>> I'm wondering if RTE_PTYPE_L2_ETHER is not a better name?
> Ethernet includes both Data Link Layer and Physical Layer, while MAC is for Data Link
> Layer only. I would prefer to keep 'MAC' in the names, rather than 'ether'.
> Any opinions from others?
Just to be precise about what I'm saying: MAC is the interface between
the logical link and the physical layer. It is different
depending on the physical media (Ethernet, Token Ring, WLAN, ...).
Every packet has a MAC layer and I think "MAC" does not bring
any information.
Having "ETHER" in the name would inform the software that
it can expect an ethernet header. In the future, I would expect
to have more L2 types like PPP.
I also have another question about RTE_PTYPE_L2_MAC. You
describe it as "<'ether type'=[0x0800|0x86DD|others]>".
What is the meaning of "others"? Does it mean that it is
valid to set RTE_PTYPE_L2_MAC for any received packet?
For instance, an ARP packet. The driver has the choice
to set:
A- RTE_PTYPE_UNKNOWN: the driver does not know the L2 packet
type
B- RTE_PTYPE_L2_MAC: the driver knows it's an ethernet packet
(it should be the case for all received packets today as
dpdk only supports ethernet ports)
C- RTE_PTYPE_L2_ARP: the driver knows that the packet carries
an ARP header after the ethernet header.
Is it correct for a driver to always set B- for all received
packets?
Another thing that bothers me a bit is that L2_ARP, L2_LLDP,
L2_MAC_TIMESYNC, (...) are not really L2 types. The L2 type is
Ethernet. On the other hand, they are not L3 types either.
So, I have no other solution. The OSI model is probably a
bit too theoretical, and we have to choose the solution that
is the most useful for the applications, even if it does not
absolutely match the theory ;)
Regards,
Olivier
>
> Regards,
> Helin
>
>>
>>
>>> +/**
>>> + * MAC (Media Access Control) packet type for time sync.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x88F7>
>>> + */
>>> +#define RTE_PTYPE_L2_MAC_TIMESYNC 0x00000002
>>> +/**
>>> + * ARP (Address Resolution Protocol) packet type.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x0806>
>>> + */
>>> +#define RTE_PTYPE_L2_ARP 0x00000003
>>> +/**
>>> + * LLDP (Link Layer Discovery Protocol) packet type.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x88CC>
>>> + */
>>> +#define RTE_PTYPE_L2_LLDP 0x00000004
>>
>> Maybe ETHER should appear in these names too, what do you think?
> Same as above.
>
>>
>>
>>
>>
>>> +/**
>>> + * Mask of layer 2 packet types.
>>> + * It is used for outer packet for tunneling cases.
>>> + */
>>> +#define RTE_PTYPE_L2_MASK 0x0000000f
>>> +/**
>>> + * IP (Internet Protocol) version 4 packet type.
>>> + * It is used for outer packet for tunneling cases, and does not
>>> +contain any
>>> + * header option.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'ihl'=5>
>>> + */
>>> +#define RTE_PTYPE_L3_IPV4 0x00000010
>>> +/**
>>> + * IP (Internet Protocol) version 4 packet type.
>>> + * It is used for outer packet for tunneling cases, and contains
>>> +header
>>> + * options.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'ihl'=[6-15], 'options'> */
>>> +#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
>>> +/**
>>> + * IP (Internet Protocol) version 6 packet type.
>>> + * It is used for outer packet for tunneling cases, and does not
>>> +contain any
>>> + * extension header.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=0x3B> */
>>> +#define RTE_PTYPE_L3_IPV6 0x00000040
>>> +/**
>>> + * IP (Internet Protocol) version 4 packet type.
>>> + * It is used for outer packet for tunneling cases, and may or maynot
>>> +contain
>>> + * header options.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'ihl'=[5-15], <'options'>> */
>>> +#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
>>> +/**
>>> + * IP (Internet Protocol) version 6 packet type.
>>> + * It is used for outer packet for tunneling cases, and contains
>>> +extension
>>> + * headers.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
>>> + * 'extension headers'>
>>> + */
>>> +#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
>>> +/**
>>> + * IP (Internet Protocol) version 6 packet type.
>>> + * It is used for outer packet for tunneling cases, and may or maynot
>>> +contain
>>> + * extension headers.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
>>> + * <'extension headers'>>
>>> + */
>>> +#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
>>> +/**
>>> + * Mask of layer 3 packet types.
>>> + * It is used for outer packet for tunneling cases.
>>> + */
>>> +#define RTE_PTYPE_L3_MASK 0x000000f0
>>> +/**
>>> + * TCP (Transmission Control Protocol) packet type.
>>> + * It is used for outer packet for tunneling cases.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'=6, 'MF'=0>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=6>
>>> + */
>>> +#define RTE_PTYPE_L4_TCP 0x00000100
>>> +/**
>>> + * UDP (User Datagram Protocol) packet type.
>>> + * It is used for outer packet for tunneling cases.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'=17, 'MF'=0>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=17>
>>> + */
>>> +#define RTE_PTYPE_L4_UDP 0x00000200
>>> +/**
>>> + * Fragmented IP (Internet Protocol) packet type.
>>> + * It is used for outer packet for tunneling cases.
>>> + *
>>> + * It refers to those packets of any IP types, which can be
>>> +recognized as
>>> + * fragmented. A fragmented packet cannot be recognized as any other
>>> +L4 types
>>> + * (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP, RTE_PTYPE_L4_SCTP,
>>> +RTE_PTYPE_L4_ICMP,
>>> + * RTE_PTYPE_L4_NONFRAG).
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'MF'=1>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=44>
>>> + */
>>> +#define RTE_PTYPE_L4_FRAG 0x00000300
>>> +/**
>>> + * SCTP (Stream Control Transmission Protocol) packet type.
>>> + * It is used for outer packet for tunneling cases.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'=132, 'MF'=0>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=132>
>>> + */
>>> +#define RTE_PTYPE_L4_SCTP 0x00000400
>>> +/**
>>> + * ICMP (Internet Control Message Protocol) packet type.
>>> + * It is used for outer packet for tunneling cases.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'=1, 'MF'=0>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=1>
>>> + */
>>> +#define RTE_PTYPE_L4_ICMP 0x00000500
>>> +/**
>>> + * Non-fragmented IP (Internet Protocol) packet type.
>>> + * It is used for outer packet for tunneling cases.
>>> + *
>>> + * It refers to those packets of any IP types, while cannot be
>>> +recognized as
>>> + * any of above L4 types (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP,
>>> + * RTE_PTYPE_L4_FRAG, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP).
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'!=[6|17|44|132|1]> */
>>> +#define RTE_PTYPE_L4_NONFRAG 0x00000600
>>> +/**
>>> + * Mask of layer 4 packet types.
>>> + * It is used for outer packet for tunneling cases.
>>> + */
>>> +#define RTE_PTYPE_L4_MASK 0x00000f00
>>> +/**
>>> + * IP (Internet Protocol) in IP (Internet Protocol) tunneling packet type.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'=[4|41]>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=[4|41]> */
>>> +#define RTE_PTYPE_TUNNEL_IP 0x00001000
>>> +/**
>>> + * GRE (Generic Routing Encapsulation) tunneling packet type.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'=47>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=47>
>>> + */
>>> +#define RTE_PTYPE_TUNNEL_GRE 0x00002000
>>> +/**
>>> + * VXLAN (Virtual eXtensible Local Area Network) tunneling packet type.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'=17
>>> + * | 'destination port'=4798>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=17
>>> + * | 'destination port'=4798>
>>> + */
>>> +#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
>>> +/**
>>> + * NVGRE (Network Virtualization using Generic Routing Encapsulation)
>>> +tunneling
>>> + * packet type.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'=47
>>> + * | 'protocol type'=0x6558>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=47
>>> + * | 'protocol type'=0x6558'>
>>> + */
>>> +#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
>>> +/**
>>> + * GENEVE (Generic Network Virtualization Encapsulation) tunneling packet
>> type.
>>> + *
>>> + * Packet format:
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'=17
>>> + * | 'destination port'=6081>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=17
>>> + * | 'destination port'=6081>
>>> + */
>>> +#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
>>> +/**
>>> + * Tunneling packet type of Teredo, VXLAN (Virtual eXtensible Local
>>> +Area
>>> + * Network) or GRE (Generic Routing Encapsulation) could be
>>> +recognized as this
>>> + * packet type, if they can not be recognized independently as of
>>> +hardware
>>> + * capability.
>>> + */
>>> +#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
>>> +/**
>>> + * Mask of tunneling packet types.
>>> + */
>>> +#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
>>> +/**
>>> + * MAC (Media Access Control) packet type.
>>> + * It is used for inner packet type only.
>>> + *
>>> + * Packet format (inner only):
>>> + * <'ether type'=[0x800|0x86DD]>
>>> + */
>>> +#define RTE_PTYPE_INNER_L2_MAC 0x00010000
>>> +/**
>>> + * MAC (Media Access Control) packet type with VLAN (Virtual Local
>>> +Area
>>> + * Network) tag.
>>> + *
>>> + * Packet format (inner only):
>>> + * <'ether type'=[0x800|0x86DD], vlan=[1-4095]> */
>>> +#define RTE_PTYPE_INNER_L2_MAC_VLAN 0x00020000
>>> +/**
>>> + * Mask of inner layer 2 packet types.
>>> + */
>>> +#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
>>> +/**
>>> + * IP (Internet Protocol) version 4 packet type.
>>> + * It is used for inner packet only, and does not contain any header option.
>>> + *
>>> + * Packet format (inner only):
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'ihl'=5>
>>> + */
>>> +#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
>>> +/**
>>> + * IP (Internet Protocol) version 4 packet type.
>>> + * It is used for inner packet only, and contains header options.
>>> + *
>>> + * Packet format (inner only):
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'ihl'=[6-15], 'options'> */
>>> +#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
>>> +/**
>>> + * IP (Internet Protocol) version 6 packet type.
>>> + * It is used for inner packet only, and does not contain any extension header.
>>> + *
>>> + * Packet format (inner only):
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=0x3B> */
>>> +#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
>>> +/**
>>> + * IP (Internet Protocol) version 4 packet type.
>>> + * It is used for inner packet only, and may or maynot contain header options.
>>> + *
>>> + * Packet format (inner only):
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'ihl'=[5-15], <'options'>> */ #define
>>> +RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
>>> +/**
>>> + * IP (Internet Protocol) version 6 packet type.
>>> + * It is used for inner packet only, and contains extension headers.
>>> + *
>>> + * Packet format (inner only):
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
>>> + * 'extension headers'>
>>> + */
>>> +#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
>>> +/**
>>> + * IP (Internet Protocol) version 6 packet type.
>>> + * It is used for inner packet only, and may or maynot contain
>>> +extension
>>> + * headers.
>>> + *
>>> + * Packet format (inner only):
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
>>> + * <'extension headers'>>
>>> + */
>>> +#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
>>> +/**
>>> + * Mask of inner layer 3 packet types.
>>> + */
>>> +#define RTE_PTYPE_INNER_INNER_L3_MASK 0x00f00000
>>> +/**
>>> + * TCP (Transmission Control Protocol) packet type.
>>> + * It is used for inner packet only.
>>> + *
>>> + * Packet format (inner only):
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'=6, 'MF'=0>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=6>
>>> + */
>>> +#define RTE_PTYPE_INNER_L4_TCP 0x01000000
>>> +/**
>>> + * UDP (User Datagram Protocol) packet type.
>>> + * It is used for inner packet only.
>>> + *
>>> + * Packet format (inner only):
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'=17, 'MF'=0>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=17>
>>> + */
>>> +#define RTE_PTYPE_INNER_L4_UDP 0x02000000
>>> +/**
>>> + * Fragmented IP (Internet Protocol) packet type.
>>> + * It is used for inner packet only, and may or maynot have layer 4 packet.
>>> + *
>>> + * Packet format (inner only):
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'MF'=1>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=44>
>>> + */
>>> +#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
>>> +/**
>>> + * SCTP (Stream Control Transmission Protocol) packet type.
>>> + * It is used for inner packet only.
>>> + *
>>> + * Packet format (inner only):
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'=132, 'MF'=0>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=132>
>>> + */
>>> +#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
>>> +/**
>>> + * ICMP (Internet Control Message Protocol) packet type.
>>> + * It is used for inner packet only.
>>> + *
>>> + * Packet format (inner only):
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'=1, 'MF'=0>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'=1>
>>> + */
>>> +#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
>>> +/**
>>> + * Non-fragmented IP (Internet Protocol) packet type.
>>> + * It is used for inner packet only, and may or maynot have other
>>> +unknown layer
>>> + * 4 packet types.
>>> + *
>>> + * Packet format (inner only):
>>> + * <'ether type'=0x0800
>>> + * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
>>> + * or,
>>> + * <'ether type'=0x86DD
>>> + * | 'version'=6, 'next header'!=[6|17|44|132|1]> */
>>> +#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
>>> +/**
>>> + * Mask of inner layer 4 packet types.
>>> + */
>>> +#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
>>> +
>>> +/**
>>> + * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4
>>> +types one by
>>> + * one, bit 4 is selected to be used for IPv4 only. Then checking bit
>>> +4 can
>>> + * determin if it is an IPV4 packet.
>>> + */
>>> +#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
>>> +
>>> +/**
>>> + * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4
>>> +types one by
>>> + * one, bit 6 is selected to be used for IPv4 only. Then checking bit
>>> +6 can
>>> + * determin if it is an IPV4 packet.
>>> + */
>>> +#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
>>> +
>>> +/* Check if it is a tunneling packet */ #define
>>> +RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
>> #endif
>>> +/* RTE_NEXT_ABI */
>>> +
>>> /**
>>> * Get the name of a RX offload flag
>>> *
>>>
>
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v8 01/18] mbuf: redefine packet_type in rte_mbuf
2015-07-02 9:03 ` Thomas Monjalon
@ 2015-07-03 1:11 ` Zhang, Helin
0 siblings, 0 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-07-03 1:11 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
Hi Thomas
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Thursday, July 2, 2015 5:03 PM
> To: Zhang, Helin
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v8 01/18] mbuf: redefine packet_type in
> rte_mbuf
>
> 2015-06-23 09:50, Helin Zhang:
> > In order to unify the packet type, the field of 'packet_type' in
> > 'struct rte_mbuf' needs to be extended from 16 to 32 bits.
> > Accordingly, some fields in 'struct rte_mbuf' are re-organized to
> > support this change for Vector PMD. As 'struct rte_kni_mbuf' for KNI
> > should be right mapped to 'struct rte_mbuf', it should be modified
> > accordingly. In addition, Vector PMD of ixgbe is disabled by default,
> > as 'struct rte_mbuf' changed.
> [...]
> > -CONFIG_RTE_IXGBE_INC_VECTOR=y
> > +CONFIG_RTE_IXGBE_INC_VECTOR=n
>
> It is the default configuration. Disabling it does not prevent a build break during
> a "git bisect".
> Please merge the changes for vector ixgbe into this patch.
Sure, no problem!
V9 will be sent soon. Thanks!
- Helin
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v8 03/18] mbuf: add definitions of unified packet types
2015-07-02 9:31 ` Olivier MATZ
@ 2015-07-03 1:30 ` Zhang, Helin
0 siblings, 0 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-07-03 1:30 UTC (permalink / raw)
To: Olivier MATZ, dev
> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Thursday, July 2, 2015 5:32 PM
> To: Zhang, Helin; dev@dpdk.org
> Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev, Konstantin; Richardson,
> Bruce; yongwang@vmware.com; Wu, Jingjing
> Subject: Re: [PATCH v8 03/18] mbuf: add definitions of unified packet types
>
> Hi Helin,
>
> On 07/02/2015 03:30 AM, Zhang, Helin wrote:
> > Hi Olivier
> >
> > Thanks for your help!
> >
> >> -----Original Message-----
> >> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> >> Sent: Tuesday, June 30, 2015 4:44 PM
> >> To: Zhang, Helin; dev@dpdk.org
> >> Cc: Cao, Waterman; Liang, Cunming; Liu, Jijiang; Ananyev, Konstantin;
> >> Richardson, Bruce; yongwang@vmware.com; Wu, Jingjing
> >> Subject: Re: [PATCH v8 03/18] mbuf: add definitions of unified packet
> >> types
> >>
> >> Hi Helin,
> >>
> >> This is greatly documented, thanks!
> >> Please find a small comment below.
> >>
> >> On 06/23/2015 03:50 AM, Helin Zhang wrote:
> >>> As there are only 6 bit flags in ol_flags for indicating packet
> >>> types, which is not enough to describe all the possible packet types
> >>> hardware can recognize. For example, i40e hardware can recognize
> >>> more than 150 packet types. Unified packet type is composed of L2
> >>> type, L3 type, L4 type, tunnel type, inner L2 type, inner L3 type
> >>> and inner L4 type fields, and can be stored in 'struct rte_mbuf' of
> >>> 32 bits field 'packet_type'.
> >>> To avoid breaking ABI compatibility, all the changes would be
> >>> enabled by RTE_NEXT_ABI, which is disabled by default.
> >>>
> >>> [...]
> >>> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> >>> index 0315561..0ee0c55 100644
> >>> --- a/lib/librte_mbuf/rte_mbuf.h
> >>> +++ b/lib/librte_mbuf/rte_mbuf.h
> >>> @@ -201,6 +201,493 @@ extern "C" {
> >>> /* Use final bit of flags to indicate a control mbuf */
> >>> #define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains
> control
> >> data */
> >>>
> >>> +#ifdef RTE_NEXT_ABI
> >>> +/*
> >>> + * 32 bits are divided into several fields to mark packet types.
> >>> +Note that
> >>> + * each field is indexical.
> >>> + * - Bit 3:0 is for L2 types.
> >>> + * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
> >>> + * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
> >>> + * - Bit 15:12 is for tunnel types.
> >>> + * - Bit 19:16 is for inner L2 types.
> >>> + * - Bit 23:20 is for inner L3 types.
> >>> + * - Bit 27:24 is for inner L4 types.
> >>> + * - Bit 31:28 is reserved.
> >>> + *
> >>> + * To be compatible with Vector PMD, RTE_PTYPE_L3_IPV4,
> >>> +RTE_PTYPE_L3_IPV4_EXT,
> >>> + * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP,
> >>> +RTE_PTYPE_L4_UDP
> >>> + * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
> >>> + *
> >>> + * Note that L3 types values are selected for checking IPV4/IPV6
> >>> +header from
> >>> + * performance point of view. Reading annotations of
> >>> +RTE_ETH_IS_IPV4_HDR and
> >>> + * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type
> values.
> >>> + *
> >>> + * Note that the packet types of the same packet recognized by
> >>> +different
> >>> + * hardware may be different, as different hardware may have
> >>> +different
> >>> + * capability of packet type recognition.
> >>> + *
> >>> + * examples:
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'=0x29
> >>> + * | 'version'=6, 'next header'=0x3A
> >>> + * | 'ICMPv6 header'>
> >>> + * will be recognized on i40e hardware as packet type combination
> >>> +of,
> >>> + * RTE_PTYPE_L2_MAC |
> >>> + * RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> >>> + * RTE_PTYPE_TUNNEL_IP |
> >>> + * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> >>> + * RTE_PTYPE_INNER_L4_ICMP.
> >>> + *
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=0x2F
> >>> + * | 'GRE header'
> >>> + * | 'version'=6, 'next header'=0x11
> >>> + * | 'UDP header'>
> >>> + * will be recognized on i40e hardware as packet type combination
> >>> +of,
> >>> + * RTE_PTYPE_L2_MAC |
> >>> + * RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
> >>> + * RTE_PTYPE_TUNNEL_GRENAT |
> >>> + * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
> >>> + * RTE_PTYPE_INNER_L4_UDP.
> >>> + */
> >>> +#define RTE_PTYPE_UNKNOWN 0x00000000
> >>> +/**
> >>> + * MAC (Media Access Control) packet type.
> >>> + * It is used for outer packet for tunneling cases.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=[0x0800|0x86DD|others]> */
> >>> +#define RTE_PTYPE_L2_MAC 0x00000001
> >>
> >> I'm wondering if RTE_PTYPE_L2_ETHER is not a better name?
> > Ethernet includes both Data Link Layer and Physical Layer, while MAC
> > is for Data Link Layer only. I would prefer to keep 'MAC' in the names, rather
> than 'ether'.
> > Any opinions from others?
>
> Just to be precise about what I'm saying: MAC is the interface between the logical link and
> the physical layer. It is different depending on the physical media (Ethernet,
> Token Ring, WLAN, ...).
> Every packet has a MAC layer and I think "MAC" does not bring any information.
>
> Having "ETHER" in the name would inform the software that it can expect an
> ethernet header. In the future, I would expect to have more L2 types like PPP.
OK, good explanation! I will change the name. Thanks!
>
> I also have another question about RTE_PTYPE_L2_MAC. You describe it as
> "<'ether type'=[0x0800|0x86DD|others]>".
> What is the meaning of "others"? Does it mean that it is valid to set
> RTE_PTYPE_L2_MAC for any received packet?
OK. I think 'others' should be removed, as ARP/LLDP-like packet types are combined
together with the MAC type.
>
> For instance, an ARP packet. The driver has the choice to set:
> A- RTE_PTYPE_UNKNOWN: the driver does not know the L2 packet
> type
> B- RTE_PTYPE_L2_MAC: the driver knows it's an ethernet packet
> (it should be the case for all received packets today as
> dpdk only supports ethernet ports)
> C- RTE_PTYPE_L2_ARP: the driver knows that the packet carries
> an ARP header after the ethernet header.
>
> Is it correct for a driver to always set B- for all received packets?
Currently it combines the ether type and other ether-type-based protocols together.
So if it is L2_LLDP, it could be treated as MAC + LLDP, while MAC means MAC only or MAC + L3.
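A small sketch of that combination, with the numeric values copied from the patch
(illustrative only): an LLDP frame would carry only an L2 code, while a recognized
IPv4/TCP frame would carry L2_MAC plus L3 and L4 codes.

#include <stdint.h>

/* Values copied from the patch above (RTE_NEXT_ABI); illustrative only. */
static const uint32_t lldp_ptype = 0x00000004;		/* RTE_PTYPE_L2_LLDP, i.e. "MAC + LLDP" */
static const uint32_t ipv4_tcp_ptype = 0x00000001	/* RTE_PTYPE_L2_MAC */
				     | 0x00000010	/* RTE_PTYPE_L3_IPV4 */
				     | 0x00000100;	/* RTE_PTYPE_L4_TCP */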
>
> Another thing that bothers me a bit is that L2_ARP, L2_LLDP, L2_MAC_TIMESYNC,
> (...) are not really L2 types. The L2 type is Ethernet. On the other hand, they are
> not L3 types either.
> So, I have no other solution. The OSI model is probably a bit too theoretical, and we
> have to choose the solution that is the most useful for the applications, even if it
> does not absolutely match the theory ;)
Yes, they are a bit bothersome. Currently they are combined together with the L2 types.
Maybe a MISC packet type field needs to be defined, as we still have 4 bits available there.
Thanks,
Helin
>
>
> Regards,
> Olivier
>
>
> >
> > Regards,
> > Helin
> >
> >>
> >>
> >>> +/**
> >>> + * MAC (Media Access Control) packet type for time sync.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x88F7>
> >>> + */
> >>> +#define RTE_PTYPE_L2_MAC_TIMESYNC 0x00000002
> >>> +/**
> >>> + * ARP (Address Resolution Protocol) packet type.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x0806>
> >>> + */
> >>> +#define RTE_PTYPE_L2_ARP 0x00000003
> >>> +/**
> >>> + * LLDP (Link Layer Discovery Protocol) packet type.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x88CC>
> >>> + */
> >>> +#define RTE_PTYPE_L2_LLDP 0x00000004
> >>
> >> Maybe ETHER should appear in these names too, what do you think?
> > Same as above.
> >
> >>
> >>
> >>
> >>
> >>> +/**
> >>> + * Mask of layer 2 packet types.
> >>> + * It is used for outer packet for tunneling cases.
> >>> + */
> >>> +#define RTE_PTYPE_L2_MASK 0x0000000f
> >>> +/**
> >>> + * IP (Internet Protocol) version 4 packet type.
> >>> + * It is used for outer packet for tunneling cases, and does not
> >>> +contain any
> >>> + * header option.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'ihl'=5>
> >>> + */
> >>> +#define RTE_PTYPE_L3_IPV4 0x00000010
> >>> +/**
> >>> + * IP (Internet Protocol) version 4 packet type.
> >>> + * It is used for outer packet for tunneling cases, and contains
> >>> +header
> >>> + * options.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'ihl'=[6-15], 'options'> */
> >>> +#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
> >>> +/**
> >>> + * IP (Internet Protocol) version 6 packet type.
> >>> + * It is used for outer packet for tunneling cases, and does not
> >>> +contain any
> >>> + * extension header.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=0x3B> */
> >>> +#define RTE_PTYPE_L3_IPV6 0x00000040
> >>> +/**
> >>> + * IP (Internet Protocol) version 4 packet type.
> >>> + * It is used for outer packet for tunneling cases, and may or
> >>> +maynot contain
> >>> + * header options.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'ihl'=[5-15], <'options'>> */
> >>> +#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
> >>> +/**
> >>> + * IP (Internet Protocol) version 6 packet type.
> >>> + * It is used for outer packet for tunneling cases, and contains
> >>> +extension
> >>> + * headers.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
> >>> + * 'extension headers'>
> >>> + */
> >>> +#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
> >>> +/**
> >>> + * IP (Internet Protocol) version 6 packet type.
> >>> + * It is used for outer packet for tunneling cases, and may or
> >>> +maynot contain
> >>> + * extension headers.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next
> header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
> >>> + * <'extension headers'>>
> >>> + */
> >>> +#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
> >>> +/**
> >>> + * Mask of layer 3 packet types.
> >>> + * It is used for outer packet for tunneling cases.
> >>> + */
> >>> +#define RTE_PTYPE_L3_MASK 0x000000f0
> >>> +/**
> >>> + * TCP (Transmission Control Protocol) packet type.
> >>> + * It is used for outer packet for tunneling cases.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'=6, 'MF'=0>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=6>
> >>> + */
> >>> +#define RTE_PTYPE_L4_TCP 0x00000100
> >>> +/**
> >>> + * UDP (User Datagram Protocol) packet type.
> >>> + * It is used for outer packet for tunneling cases.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'=17, 'MF'=0>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=17> */
> >>> +#define RTE_PTYPE_L4_UDP 0x00000200
> >>> +/**
> >>> + * Fragmented IP (Internet Protocol) packet type.
> >>> + * It is used for outer packet for tunneling cases.
> >>> + *
> >>> + * It refers to those packets of any IP types, which can be
> >>> +recognized as
> >>> + * fragmented. A fragmented packet cannot be recognized as any
> >>> +other
> >>> +L4 types
> >>> + * (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP, RTE_PTYPE_L4_SCTP,
> >>> +RTE_PTYPE_L4_ICMP,
> >>> + * RTE_PTYPE_L4_NONFRAG).
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'MF'=1>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=44> */
> >>> +#define RTE_PTYPE_L4_FRAG 0x00000300
> >>> +/**
> >>> + * SCTP (Stream Control Transmission Protocol) packet type.
> >>> + * It is used for outer packet for tunneling cases.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'=132, 'MF'=0>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=132> */
> >>> +#define RTE_PTYPE_L4_SCTP 0x00000400
> >>> +/**
> >>> + * ICMP (Internet Control Message Protocol) packet type.
> >>> + * It is used for outer packet for tunneling cases.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'=1, 'MF'=0>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=1>
> >>> + */
> >>> +#define RTE_PTYPE_L4_ICMP 0x00000500
> >>> +/**
> >>> + * Non-fragmented IP (Internet Protocol) packet type.
> >>> + * It is used for outer packet for tunneling cases.
> >>> + *
> >>> + * It refers to those packets of any IP types, while cannot be
> >>> +recognized as
> >>> + * any of above L4 types (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP,
> >>> + * RTE_PTYPE_L4_FRAG, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP).
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'!=[6|17|44|132|1]> */
> >>> +#define RTE_PTYPE_L4_NONFRAG 0x00000600
> >>> +/**
> >>> + * Mask of layer 4 packet types.
> >>> + * It is used for outer packet for tunneling cases.
> >>> + */
> >>> +#define RTE_PTYPE_L4_MASK 0x00000f00
> >>> +/**
> >>> + * IP (Internet Protocol) in IP (Internet Protocol) tunneling packet type.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'=[4|41]>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=[4|41]> */
> >>> +#define RTE_PTYPE_TUNNEL_IP 0x00001000
> >>> +/**
> >>> + * GRE (Generic Routing Encapsulation) tunneling packet type.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'=47>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=47> */
> >>> +#define RTE_PTYPE_TUNNEL_GRE 0x00002000
> >>> +/**
> >>> + * VXLAN (Virtual eXtensible Local Area Network) tunneling packet type.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'=17
> >>> + * | 'destination port'=4798>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=17
> >>> + * | 'destination port'=4798>
> >>> + */
> >>> +#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
> >>> +/**
> >>> + * NVGRE (Network Virtualization using Generic Routing
> >>> +Encapsulation) tunneling
> >>> + * packet type.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'=47
> >>> + * | 'protocol type'=0x6558>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=47
> >>> + * | 'protocol type'=0x6558'>
> >>> + */
> >>> +#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
> >>> +/**
> >>> + * GENEVE (Generic Network Virtualization Encapsulation) tunneling
> >>> +packet
> >> type.
> >>> + *
> >>> + * Packet format:
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'=17
> >>> + * | 'destination port'=6081>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=17
> >>> + * | 'destination port'=6081>
> >>> + */
> >>> +#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
> >>> +/**
> >>> + * Tunneling packet type of Teredo, VXLAN (Virtual eXtensible Local
> >>> +Area
> >>> + * Network) or GRE (Generic Routing Encapsulation) could be
> >>> +recognized as this
> >>> + * packet type, if they can not be recognized independently as of
> >>> +hardware
> >>> + * capability.
> >>> + */
> >>> +#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
> >>> +/**
> >>> + * Mask of tunneling packet types.
> >>> + */
> >>> +#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
> >>> +/**
> >>> + * MAC (Media Access Control) packet type.
> >>> + * It is used for inner packet type only.
> >>> + *
> >>> + * Packet format (inner only):
> >>> + * <'ether type'=[0x800|0x86DD]>
> >>> + */
> >>> +#define RTE_PTYPE_INNER_L2_MAC 0x00010000
> >>> +/**
> >>> + * MAC (Media Access Control) packet type with VLAN (Virtual Local
> >>> +Area
> >>> + * Network) tag.
> >>> + *
> >>> + * Packet format (inner only):
> >>> + * <'ether type'=[0x800|0x86DD], vlan=[1-4095]> */
> >>> +#define RTE_PTYPE_INNER_L2_MAC_VLAN 0x00020000
> >>> +/**
> >>> + * Mask of inner layer 2 packet types.
> >>> + */
> >>> +#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
> >>> +/**
> >>> + * IP (Internet Protocol) version 4 packet type.
> >>> + * It is used for inner packet only, and does not contain any header option.
> >>> + *
> >>> + * Packet format (inner only):
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'ihl'=5>
> >>> + */
> >>> +#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
> >>> +/**
> >>> + * IP (Internet Protocol) version 4 packet type.
> >>> + * It is used for inner packet only, and contains header options.
> >>> + *
> >>> + * Packet format (inner only):
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'ihl'=[6-15], 'options'> */
> >>> +#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
> >>> +/**
> >>> + * IP (Internet Protocol) version 6 packet type.
> >>> + * It is used for inner packet only, and does not contain any extension
> header.
> >>> + *
> >>> + * Packet format (inner only):
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=0x3B> */
> >>> +#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
> >>> +/**
> >>> + * IP (Internet Protocol) version 4 packet type.
> >>> + * It is used for inner packet only, and may or may not contain header
> >>> + * options.
> >>> + *
> >>> + * Packet format (inner only):
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'ihl'=[5-15], <'options'>> */
> >>> +#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
> >>> +/**
> >>> + * IP (Internet Protocol) version 6 packet type.
> >>> + * It is used for inner packet only, and contains extension headers.
> >>> + *
> >>> + * Packet format (inner only):
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
> >>> + * 'extension headers'>
> >>> + */
> >>> +#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
> >>> +/**
> >>> + * IP (Internet Protocol) version 6 packet type.
> >>> + * It is used for inner packet only, and may or may not contain
> >>> + * extension headers.
> >>> + *
> >>> + * Packet format (inner only):
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next
> header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
> >>> + * <'extension headers'>>
> >>> + */
> >>> +#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
> >>> +/**
> >>> + * Mask of inner layer 3 packet types.
> >>> + */
> >>> +#define RTE_PTYPE_INNER_INNER_L3_MASK 0x00f00000
> >>> +/**
> >>> + * TCP (Transmission Control Protocol) packet type.
> >>> + * It is used for inner packet only.
> >>> + *
> >>> + * Packet format (inner only):
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'=6, 'MF'=0>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=6>
> >>> + */
> >>> +#define RTE_PTYPE_INNER_L4_TCP 0x01000000
> >>> +/**
> >>> + * UDP (User Datagram Protocol) packet type.
> >>> + * It is used for inner packet only.
> >>> + *
> >>> + * Packet format (inner only):
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'=17, 'MF'=0>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=17> */
> >>> +#define RTE_PTYPE_INNER_L4_UDP 0x02000000
> >>> +/**
> >>> + * Fragmented IP (Internet Protocol) packet type.
> >>> + * It is used for inner packet only, and may or may not have a layer 4 packet.
> >>> + *
> >>> + * Packet format (inner only):
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'MF'=1>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=44> */
> >>> +#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
> >>> +/**
> >>> + * SCTP (Stream Control Transmission Protocol) packet type.
> >>> + * It is used for inner packet only.
> >>> + *
> >>> + * Packet format (inner only):
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'=132, 'MF'=0>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=132> */
> >>> +#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
> >>> +/**
> >>> + * ICMP (Internet Control Message Protocol) packet type.
> >>> + * It is used for inner packet only.
> >>> + *
> >>> + * Packet format (inner only):
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'=1, 'MF'=0>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'=1>
> >>> + */
> >>> +#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
> >>> +/**
> >>> + * Non-fragmented IP (Internet Protocol) packet type.
> >>> + * It is used for inner packet only, and may or may not have other
> >>> + * unknown layer 4 packet types.
> >>> + *
> >>> + * Packet format (inner only):
> >>> + * <'ether type'=0x0800
> >>> + * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
> >>> + * or,
> >>> + * <'ether type'=0x86DD
> >>> + * | 'version'=6, 'next header'!=[6|17|44|132|1]> */
> >>> +#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
> >>> +/**
> >>> + * Mask of inner layer 4 packet types.
> >>> + */
> >>> +#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
> >>> +
> >>> +/**
> >>> + * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4
> >>> + * types one by one, bit 4 is selected to be used for IPv4 only. Then
> >>> + * checking bit 4 can determine whether it is an IPv4 packet.
> >>> + */
> >>> +#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
> >>> +
> >>> +/**
> >>> + * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6
> >>> + * types one by one, bit 6 is selected to be used for IPv6 only. Then
> >>> + * checking bit 6 can determine whether it is an IPv6 packet.
> >>> + */
> >>> +#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
> >>> +
> >>> +/* Check if it is a tunneling packet */
> >>> +#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
> >>> +#endif /* RTE_NEXT_ABI */
> >>> +
> >>> /**
> >>> * Get the name of a RX offload flag
> >>> *
> >>>
> >
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 00/19] unified packet type
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
` (18 preceding siblings ...)
2015-06-23 16:13 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Ananyev, Konstantin
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 01/19] mbuf: redefine packet_type in rte_mbuf Helin Zhang
` (19 more replies)
19 siblings, 20 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
Currently only 6 bits stored in ol_flags are used to indicate the packet
types. This is not enough, as some NIC hardware can recognize quite a lot
of packet types, e.g. i40e hardware can recognize more than 150 packet
types. Hiding those packet types hides hardware offload capabilities which
could be quite useful for improving performance and for end users.
So a unified packet type is needed to support all possible PMDs. The 16-bit
packet_type field in the mbuf structure can be enlarged to 32 bits and used
for this purpose. In addition, all packet types stored in the ol_flags field
should be removed entirely, saving 6 bits of ol_flags as a benefit.
The 32 bits of packet_type are divided into several sub-fields to indicate
different packet type information of a packet. The initial design divides
those bits into fields for L2 types, L3 types, L4 types, tunnel types,
inner L2 types, inner L3 types and inner L4 types. All PMDs should translate
the hardware-reported packet types into these 7 fields of information for
user applications.
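For illustration only (this sketch is not part of the patch set): assuming
RTE_NEXT_ABI is enabled and the RTE_PTYPE_* macros defined later in this
series are available, an application could branch on the 32-bit packet_type
roughly as below. The handle_*() helpers are hypothetical placeholders.

#include <rte_mbuf.h>

/* Hypothetical application handlers, assumed to exist elsewhere. */
void handle_tunneled(struct rte_mbuf *m);
void handle_ipv4_udp(struct rte_mbuf *m);
void handle_ipv6(struct rte_mbuf *m);

static void
dispatch_by_ptype(struct rte_mbuf *m)
{
	uint32_t ptype = m->packet_type;

	if (RTE_ETH_IS_TUNNEL_PKT(ptype)) {
		/* Outer headers are described by RTE_PTYPE_L2/L3/L4_*,
		 * inner headers by the RTE_PTYPE_INNER_* fields. */
		handle_tunneled(m);
	} else if (RTE_ETH_IS_IPV4_HDR(ptype) &&
		   (ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP) {
		handle_ipv4_udp(m);
	} else if (RTE_ETH_IS_IPV6_HDR(ptype)) {
		handle_ipv6(m);
	}
	/* else: RTE_PTYPE_UNKNOWN or a type this application ignores */
}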
To avoid breaking ABI compatibility, all the code changes for the unified
packet type are currently disabled at compile time by default. Users can
enable them manually by defining the RTE_NEXT_ABI macro. The code changes
will be enabled by default in a future release, and the old version will be
removed accordingly, once the ABI change process is done.
Note that this patch set should be integrated after the patch set for
'[PATCH v3 0/7] support i40e QinQ stripping and insertion', to cleanly
resolve the conflicts during integration, as both patch sets modify
'struct rte_mbuf' and the final layout of 'struct rte_mbuf' is key to the
vectorized ixgbe PMD.
Its v8 version was acked by Konstantin Ananyev <konstantin.ananyev@intel.com>.
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
* Used redefined packet types and enlarged packet_type field for all PMDs
and corresponding applications.
* Removed changes in bond and its relevant application, as they are no longer
needed according to the recent bond changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Put vector ixgbe changes right after mbuf changes.
* Disabled the vector ixgbe PMD by default, as the mbuf layout changed, and then
re-enabled it after the vector ixgbe PMD was updated.
* Put the definitions of unified packet type into a single patch.
* Minor bug fixes and enhancements in l3fwd example.
v4 changes:
* Added a detailed description of each packet type.
* Supported unified packet type of fm10k.
* Added logging of the packet type of each received packet in testpmd
rxonly mode.
* Removed several useless code lines which block packet type unification from
app/test/packet_burst_generator.c.
v5 changes:
* Added a more detailed description for each packet type, together with examples.
* Rolled back the macro definitions of RX packet flags, for ABI compatibility.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
* Integrated with patch set for '[PATCH v3 0/7] support i40e QinQ stripping
and insertion', to clearly solve the conflicts during merging.
v8 changes:
* Moved the 'vlan_tci_outer' field in 'struct rte_mbuf' to the end of the 1st
cache line, to avoid breaking any vectorized PMD stores, as the fields
'packet_type, pkt_len, data_len, vlan_tci, rss' should be within a contiguous
128 bits.
v9 changes:
* Put the mbuf changes and vector PMD changes together, as they are
tightly relevant.
* Renamed MAC to ETHER in packet type names.
* Corrected the packet type explanation of RTE_PTYPE_L2_ETHER.
* Reworked the newly added cxgbe driver and the tep_termination example
application to support the unified packet type, which is disabled by default.
Helin Zhang (19):
mbuf: redefine packet_type in rte_mbuf
mbuf: add definitions of unified packet types
e1000: replace bit mask based packet type with unified packet type
ixgbe: replace bit mask based packet type with unified packet type
i40e: replace bit mask based packet type with unified packet type
enic: replace bit mask based packet type with unified packet type
vmxnet3: replace bit mask based packet type with unified packet type
fm10k: replace bit mask based packet type with unified packet type
cxgbe: replace bit mask based packet type with unified packet type
app/test-pipeline: replace bit mask based packet type with unified
packet type
app/testpmd: replace bit mask based packet type with unified packet
type
app/test: Remove useless code
examples/ip_fragmentation: replace bit mask based packet type with
unified packet type
examples/ip_reassembly: replace bit mask based packet type with
unified packet type
examples/l3fwd-acl: replace bit mask based packet type with unified
packet type
examples/l3fwd-power: replace bit mask based packet type with unified
packet type
examples/l3fwd: replace bit mask based packet type with unified packet
type
examples/tep_termination: replace bit mask based packet type with
unified packet type
mbuf: remove old packet type bit masks
app/test-pipeline/pipeline_hash.c | 13 +
app/test-pmd/csumonly.c | 14 +
app/test-pmd/rxonly.c | 183 +++++++
app/test/packet_burst_generator.c | 6 +-
drivers/net/cxgbe/sge.c | 8 +
drivers/net/e1000/igb_rxtx.c | 104 ++++
drivers/net/enic/enic_main.c | 26 +
drivers/net/fm10k/fm10k_rxtx.c | 27 +
drivers/net/i40e/i40e_rxtx.c | 554 +++++++++++++++++++++
drivers/net/ixgbe/ixgbe_rxtx.c | 163 ++++++
drivers/net/ixgbe/ixgbe_rxtx_vec.c | 75 ++-
drivers/net/vmxnet3/vmxnet3_rxtx.c | 8 +
examples/ip_fragmentation/main.c | 9 +
examples/ip_reassembly/main.c | 9 +
examples/l3fwd-acl/main.c | 29 +-
examples/l3fwd-power/main.c | 8 +
examples/l3fwd/main.c | 123 ++++-
examples/tep_termination/vxlan.c | 4 +
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 +
lib/librte_mbuf/rte_mbuf.c | 4 +
lib/librte_mbuf/rte_mbuf.h | 516 +++++++++++++++++++
21 files changed, 1876 insertions(+), 13 deletions(-)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 01/19] mbuf: redefine packet_type in rte_mbuf
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 02/19] mbuf: add definitions of unified packet types Helin Zhang
` (18 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
In order to unify the packet type, the 'packet_type' field in 'struct rte_mbuf'
needs to be extended from 16 to 32 bits. Accordingly, some fields in 'struct rte_mbuf'
are re-organized to support this change for the vector PMD. As 'struct rte_kni_mbuf'
for KNI must be mapped exactly to 'struct rte_mbuf', it is modified accordingly.
In the ixgbe PMD, corresponding changes are added for the mbuf changes; in particular,
the packet type bit masks in 'ol_flags' are replaced by the unified packet type. In
addition, more packet types (UDP, TCP and SCTP) are supported in the vectorized ixgbe
PMD.
To avoid breaking ABI compatibility, all the changes are enabled by RTE_NEXT_ABI,
which is disabled by default.
Note that a performance drop of around 2% (64B packets) was observed when doing IO
forwarding over 4 ports (1 port per 82599 card) on the same SNB core.
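As a minimal sketch (not part of the patch itself), assuming RTE_NEXT_ABI is
defined: the anonymous union added below exposes the same 32 bits either as
the whole packet_type or as per-layer 4-bit sub-fields, so a helper can test
for a tunnel without any explicit masking.

#include <rte_mbuf.h>

#ifdef RTE_NEXT_ABI
static inline int
mbuf_is_tunneled(const struct rte_mbuf *m)
{
	/* tun_type views bits 15:12 of packet_type, so this is equivalent
	 * to (m->packet_type & 0x0000f000) != 0. */
	return m->tun_type != 0;
}
#endif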
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx_vec.c | 75 +++++++++++++++++++++-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 ++
lib/librte_mbuf/rte_mbuf.h | 26 ++++++++
3 files changed, 105 insertions(+), 2 deletions(-)
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Disabled vector ixgbe PMD by default, as mbuf layout changed.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
* Integrated with changes of QinQ stripping/insertion.
v8 changes:
* Moved the 'vlan_tci_outer' field in 'struct rte_mbuf' to the end
of the 1st cache line, to avoid breaking any vectorized PMD stores.
v9 changes:
* Put the mbuf changes and vector PMD changes together, as they are
tightly relevant.
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
index abd10f6..ccea7cd 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
@@ -134,6 +134,12 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE
+#ifdef RTE_NEXT_ABI
+#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
+ ((uint64_t)PKT_RX_VLAN_PKT))
+#else
#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
PKT_RX_IPV6_HDR_EXT))
@@ -142,11 +148,26 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
((uint64_t)OLFLAGS_MASK << 16) | \
((uint64_t)OLFLAGS_MASK))
#define PTYPE_SHIFT (1)
+#endif /* RTE_NEXT_ABI */
+
#define VTAG_SHIFT (3)
static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
+#ifdef RTE_NEXT_ABI
+ __m128i vtag0, vtag1;
+ union {
+ uint16_t e[4];
+ uint64_t dword;
+ } vol;
+
+ vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
+ vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);
+ vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
+ vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);
+ vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;
+#else
__m128i ptype0, ptype1, vtag0, vtag1;
union {
uint16_t e[4];
@@ -166,6 +187,7 @@ desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
ptype1 = _mm_or_si128(ptype1, vtag1);
vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
+#endif /* RTE_NEXT_ABI */
rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
@@ -196,6 +218,18 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
int pos;
uint64_t var;
__m128i shuf_msk;
+#ifdef RTE_NEXT_ABI
+ __m128i crc_adjust = _mm_set_epi16(
+ 0, 0, 0, /* ignore non-length fields */
+ -rxq->crc_len, /* sub crc on data_len */
+ 0, /* ignore high-16bits of pkt_len */
+ -rxq->crc_len, /* sub crc on pkt_len */
+ 0, 0 /* ignore pkt_type field */
+ );
+ __m128i dd_check, eop_check;
+ __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
+ 0xFFFFFFFF, 0xFFFF07F0);
+#else
__m128i crc_adjust = _mm_set_epi16(
0, 0, 0, 0, /* ignore non-length fields */
0, /* ignore high-16bits of pkt_len */
@@ -204,6 +238,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
0 /* ignore pkt_type field */
);
__m128i dd_check, eop_check;
+#endif /* RTE_NEXT_ABI */
if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -232,6 +267,18 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL);
/* mask to shuffle from desc. to mbuf */
+#ifdef RTE_NEXT_ABI
+ shuf_msk = _mm_set_epi8(
+ 7, 6, 5, 4, /* octet 4~7, 32bits rss */
+ 15, 14, /* octet 14~15, low 16 bits vlan_macip */
+ 13, 12, /* octet 12~13, 16 bits data_len */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
+ 13, 12, /* octet 12~13, low 16 bits pkt_len */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_type */
+ 1, /* octet 1, 8 bits pkt_type field */
+ 0 /* octet 0, 4 bits offset 4 pkt_type field */
+ );
+#else
shuf_msk = _mm_set_epi8(
7, 6, 5, 4, /* octet 4~7, 32bits rss */
0xFF, 0xFF, /* skip high 16 bits vlan_macip, zero out */
@@ -241,18 +288,28 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
13, 12, /* octet 12~13, 16 bits data_len */
0xFF, 0xFF /* skip pkt_type field */
);
+#endif /* RTE_NEXT_ABI */
/* Cache is empty -> need to scan the buffer rings, but first move
* the next 'n' mbufs into the cache */
sw_ring = &rxq->sw_ring[rxq->rx_tail];
- /*
- * A. load 4 packet in one loop
+#ifdef RTE_NEXT_ABI
+ /* A. load 4 packet in one loop
+ * [A*. mask out 4 unused dirty field in desc]
* B. copy 4 mbuf point from swring to rx_pkts
* C. calc the number of DD bits among the 4 packets
* [C*. extract the end-of-packet bit, if requested]
* D. fill info. from desc to mbuf
*/
+#else
+ /* A. load 4 packet in one loop
+ * B. copy 4 mbuf point from swring to rx_pkts
+ * C. calc the number of DD bits among the 4 packets
+ * [C*. extract the end-of-packet bit, if requested]
+ * D. fill info. from desc to mbuf
+ */
+#endif /* RTE_NEXT_ABI */
for (pos = 0, nb_pkts_recd = 0; pos < RTE_IXGBE_VPMD_RX_BURST;
pos += RTE_IXGBE_DESCS_PER_LOOP,
rxdp += RTE_IXGBE_DESCS_PER_LOOP) {
@@ -289,6 +346,16 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.2 copy 2 mbuf point into rx_pkts */
_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);
+#ifdef RTE_NEXT_ABI
+ /* A* mask out 0~3 bits RSS type */
+ descs[3] = _mm_and_si128(descs[3], desc_mask);
+ descs[2] = _mm_and_si128(descs[2], desc_mask);
+
+ /* A* mask out 0~3 bits RSS type */
+ descs[1] = _mm_and_si128(descs[1], desc_mask);
+ descs[0] = _mm_and_si128(descs[0], desc_mask);
+#endif /* RTE_NEXT_ABI */
+
/* avoid compiler reorder optimization */
rte_compiler_barrier();
@@ -301,7 +368,11 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
+#ifdef RTE_NEXT_ABI
+ /* set ol_flags with vlan packet type */
+#else
/* set ol_flags with packet type and vlan tag */
+#endif /* RTE_NEXT_ABI */
desc_to_olflags_v(descs, &rx_pkts[pos]);
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index 1e55c2d..e9f38bd 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -117,9 +117,15 @@ struct rte_kni_mbuf {
uint16_t data_off; /**< Start address of data in segment buffer. */
char pad1[4];
uint64_t ol_flags; /**< Offload features. */
+#ifdef RTE_NEXT_ABI
+ char pad2[4];
+ uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
+#else
char pad2[2];
uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+#endif
/* fields on second cache line */
char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 80419df..ac29da3 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -276,6 +276,28 @@ struct rte_mbuf {
/* remaining bytes are set on RX when pulling packet from descriptor */
MARKER rx_descriptor_fields1;
+#ifdef RTE_NEXT_ABI
+ /*
+ * The packet type, which is the combination of outer/inner L2, L3, L4
+ * and tunnel types.
+ */
+ union {
+ uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
+ struct {
+ uint32_t l2_type:4; /**< (Outer) L2 type. */
+ uint32_t l3_type:4; /**< (Outer) L3 type. */
+ uint32_t l4_type:4; /**< (Outer) L4 type. */
+ uint32_t tun_type:4; /**< Tunnel type. */
+ uint32_t inner_l2_type:4; /**< Inner L2 type. */
+ uint32_t inner_l3_type:4; /**< Inner L3 type. */
+ uint32_t inner_l4_type:4; /**< Inner L4 type. */
+ };
+ };
+
+ uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
+ uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
+#else /* RTE_NEXT_ABI */
/**
* The packet type, which is used to indicate ordinary packet and also
* tunneled packet format, i.e. each number is represented a type of
@@ -287,6 +309,7 @@ struct rte_mbuf {
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU order) */
+#endif /* RTE_NEXT_ABI */
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
@@ -307,6 +330,9 @@ struct rte_mbuf {
} hash; /**< hash information */
uint32_t seqn; /**< Sequence number. See also rte_reorder_insert() */
+#ifdef RTE_NEXT_ABI
+ uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU order) */
+#endif /* RTE_NEXT_ABI */
/* second cache line - fields only used in slow path or on TX */
MARKER cacheline1 __rte_cache_aligned;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 02/19] mbuf: add definitions of unified packet types
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 01/19] mbuf: redefine packet_type in rte_mbuf Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 03/19] e1000: replace bit mask based packet type with unified packet type Helin Zhang
` (17 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
There are only 6 bit flags in ol_flags for indicating packet
types, which is not enough to describe all the possible packet
types hardware can recognize. For example, i40e hardware can
recognize more than 150 packet types. The unified packet type is
composed of L2 type, L3 type, L4 type, tunnel type, inner L2 type,
inner L3 type and inner L4 type fields, and is stored in the
32-bit 'packet_type' field of 'struct rte_mbuf'.
To avoid breaking ABI compatibility, all the changes are
enabled by RTE_NEXT_ABI, which is disabled by default.
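A small sketch (not taken from the patch) of how a value is composed from
these fields and why the single-bit helper macros work; it assumes only the
RTE_PTYPE_* macros defined in the diff below, with RTE_NEXT_ABI enabled.

#include <stdio.h>
#include <rte_mbuf.h>

int main(void)
{
	/* What a PMD would report for an Ether/IPv4-with-options/UDP frame: */
	uint32_t ptype = RTE_PTYPE_L2_ETHER |      /* 0x00000001 */
			 RTE_PTYPE_L3_IPV4_EXT |   /* 0x00000030 */
			 RTE_PTYPE_L4_UDP;         /* 0x00000200 */

	/* Every IPv4 L3 value (0x10, 0x30, 0x90) has bit 4 set, so a single
	 * AND replaces comparing each IPv4 value one by one: */
	if (RTE_ETH_IS_IPV4_HDR(ptype))
		printf("outer L3 is IPv4\n");

	/* Other sub-fields are compared after masking: */
	if ((ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)
		printf("outer L4 is UDP\n");

	return 0;
}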
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 486 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 486 insertions(+)
v3 changes:
* Put the definitions of unified packet type into a single patch.
v4 changes:
* Added a detailed description of each packet type.
v5 changes:
* Re-worded the commit logs.
* Added more detailed description for all packet types, together with examples.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
v9 changes:
* Renamed MAC to ETHER in packet type names.
* Corrected the packet type explanation of RTE_PTYPE_L2_ETHER.
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ac29da3..3a17d95 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -202,6 +202,492 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
+#ifdef RTE_NEXT_ABI
+/*
+ * 32 bits are divided into several fields to mark packet types. Note that
+ * each field holds an index value rather than an independent bit mask.
+ * - Bit 3:0 is for L2 types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
+ * - Bit 15:12 is for tunnel types.
+ * - Bit 19:16 is for inner L2 types.
+ * - Bit 23:20 is for inner L3 types.
+ * - Bit 27:24 is for inner L4 types.
+ * - Bit 31:28 is reserved.
+ *
+ * To be compatible with Vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that L3 types values are selected for checking IPV4/IPV6 header from
+ * performance point of view. Reading annotations of RTE_ETH_IS_IPV4_HDR and
+ * RTE_ETH_IS_IPV6_HDR is needed for any future changes of L3 type values.
+ *
+ * Note that the packet types of the same packet recognized by different
+ * hardware may be different, as different hardware may have different
+ * capability of packet type recognition.
+ *
+ * examples:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=0x29
+ * | 'version'=6, 'next header'=0x3A
+ * | 'ICMPv6 header'>
+ * will be recognized on i40e hardware as packet type combination of,
+ * RTE_PTYPE_L2_ETHER |
+ * RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ * RTE_PTYPE_TUNNEL_IP |
+ * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_INNER_L4_ICMP.
+ *
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x2F
+ * | 'GRE header'
+ * | 'version'=6, 'next header'=0x11
+ * | 'UDP header'>
+ * will be recognized on i40e hardware as packet type combination of,
+ * RTE_PTYPE_L2_ETHER |
+ * RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_TUNNEL_GRENAT |
+ * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_INNER_L4_UDP.
+ */
+#define RTE_PTYPE_UNKNOWN 0x00000000
+/**
+ * Ethernet packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=[0x0800|0x86DD]>
+ */
+#define RTE_PTYPE_L2_ETHER 0x00000001
+/**
+ * Ethernet packet type for time sync.
+ *
+ * Packet format:
+ * <'ether type'=0x88F7>
+ */
+#define RTE_PTYPE_L2_ETHER_TIMESYNC 0x00000002
+/**
+ * ARP (Address Resolution Protocol) packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0806>
+ */
+#define RTE_PTYPE_L2_ETHER_ARP 0x00000003
+/**
+ * LLDP (Link Layer Discovery Protocol) packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x88CC>
+ */
+#define RTE_PTYPE_L2_ETHER_LLDP 0x00000004
+/**
+ * Mask of layer 2 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L2_MASK 0x0000000f
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * header option.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=5>
+ */
+#define RTE_PTYPE_L3_IPV4 0x00000010
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and contains header
+ * options.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[6-15], 'options'>
+ */
+#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * extension header.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x3B>
+ */
+#define RTE_PTYPE_L3_IPV6 0x00000040
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * header options.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[5-15], <'options'>>
+ */
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and contains extension
+ * headers.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * 'extension headers'>
+ */
+#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * extension headers.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * <'extension headers'>>
+ */
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
+/**
+ * Mask of layer 3 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L3_MASK 0x000000f0
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=6, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=6>
+ */
+#define RTE_PTYPE_L4_TCP 0x00000100
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17>
+ */
+#define RTE_PTYPE_L4_UDP 0x00000200
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * It refers to those packets of any IP types, which can be recognized as
+ * fragmented. A fragmented packet cannot be recognized as any other L4 types
+ * (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP,
+ * RTE_PTYPE_L4_NONFRAG).
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'MF'=1>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=44>
+ */
+#define RTE_PTYPE_L4_FRAG 0x00000300
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=132, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=132>
+ */
+#define RTE_PTYPE_L4_SCTP 0x00000400
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=1, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=1>
+ */
+#define RTE_PTYPE_L4_ICMP 0x00000500
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * It refers to packets of any IP type that cannot be recognized as
+ * any of the above L4 types (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP,
+ * RTE_PTYPE_L4_FRAG, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP).
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'!=[6|17|44|132|1]>
+ */
+#define RTE_PTYPE_L4_NONFRAG 0x00000600
+/**
+ * Mask of layer 4 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_MASK 0x00000f00
+/**
+ * IP (Internet Protocol) in IP (Internet Protocol) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=[4|41]>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[4|41]>
+ */
+#define RTE_PTYPE_TUNNEL_IP 0x00001000
+/**
+ * GRE (Generic Routing Encapsulation) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=47>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=47>
+ */
+#define RTE_PTYPE_TUNNEL_GRE 0x00002000
+/**
+ * VXLAN (Virtual eXtensible Local Area Network) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17
+ * | 'destination port'=4789>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17
+ * | 'destination port'=4789>
+ */
+#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
+/**
+ * NVGRE (Network Virtualization using Generic Routing Encapsulation) tunneling
+ * packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=47
+ * | 'protocol type'=0x6558>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=47
+ * | 'protocol type'=0x6558'>
+ */
+#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
+/**
+ * GENEVE (Generic Network Virtualization Encapsulation) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17
+ * | 'destination port'=6081>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17
+ * | 'destination port'=6081>
+ */
+#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
+/**
+ * Teredo, VXLAN (Virtual eXtensible Local Area Network) or GRE (Generic
+ * Routing Encapsulation) tunneling packets could be recognized as this
+ * packet type, if they cannot be recognized independently due to limited
+ * hardware capability.
+ */
+#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
+/**
+ * Mask of tunneling packet types.
+ */
+#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
+/**
+ * Ethernet packet type.
+ * It is used for inner packet type only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=[0x800|0x86DD]>
+ */
+#define RTE_PTYPE_INNER_L2_ETHER 0x00010000
+/**
+ * Ethernet packet type with VLAN (Virtual Local Area Network) tag.
+ *
+ * Packet format (inner only):
+ * <'ether type'=[0x800|0x86DD], vlan=[1-4095]>
+ */
+#define RTE_PTYPE_INNER_L2_ETHER_VLAN 0x00020000
+/**
+ * Mask of inner layer 2 packet types.
+ */
+#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and does not contain any header option.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=5>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and contains header options.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[6-15], 'options'>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and does not contain any extension header.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x3B>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and may or may not contain header options.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[5-15], <'options'>>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and contains extension headers.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * 'extension headers'>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and may or may not contain extension
+ * headers.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * <'extension headers'>>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
+/**
+ * Mask of inner layer 3 packet types.
+ */
+#define RTE_PTYPE_INNER_INNER_L3_MASK 0x00f00000
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=6, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=6>
+ */
+#define RTE_PTYPE_INNER_L4_TCP 0x01000000
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17>
+ */
+#define RTE_PTYPE_INNER_L4_UDP 0x02000000
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not have a layer 4 packet.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'MF'=1>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=44>
+ */
+#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=132, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=132>
+ */
+#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=1, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=1>
+ */
+#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not have other unknown layer
+ * 4 packet types.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'!=[6|17|44|132|1]>
+ */
+#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
+/**
+ * Mask of inner layer 4 packet types.
+ */
+#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
+ * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
+ * determine whether it is an IPv4 packet.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
+ * one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
+ * determine whether it is an IPv6 packet.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneling packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+#endif /* RTE_NEXT_ABI */
+
/**
* Get the name of a RX offload flag
*
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 03/19] e1000: replace bit mask based packet type with unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 01/19] mbuf: redefine packet_type in rte_mbuf Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 02/19] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 04/19] ixgbe: " Helin Zhang
` (16 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are
enabled by RTE_NEXT_ABI, which is disabled by default.
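For orientation, a hedged sketch (not part of the patch) of what the change
means for code that inspects received packets; PKT_RX_IPV4_HDR is one of the
old ol_flags packet type bits that the final patch of this series removes.

#include <rte_mbuf.h>

static inline int
is_outer_ipv4(const struct rte_mbuf *m)
{
#ifdef RTE_NEXT_ABI
	return RTE_ETH_IS_IPV4_HDR(m->packet_type);   /* unified packet type */
#else
	return (m->ol_flags & PKT_RX_IPV4_HDR) != 0;  /* old bit-mask flag */
#endif
}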
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/e1000/igb_rxtx.c | 104 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 104 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
v9 changes:
* Renamed MAC to ETHER in packet type names.
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 43d6703..165144c 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -590,6 +590,101 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
+#ifdef RTE_NEXT_ABI
+#define IGB_PACKET_TYPE_IPV4 0X01
+#define IGB_PACKET_TYPE_IPV4_TCP 0X11
+#define IGB_PACKET_TYPE_IPV4_UDP 0X21
+#define IGB_PACKET_TYPE_IPV4_SCTP 0X41
+#define IGB_PACKET_TYPE_IPV4_EXT 0X03
+#define IGB_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IGB_PACKET_TYPE_IPV6 0X04
+#define IGB_PACKET_TYPE_IPV6_TCP 0X14
+#define IGB_PACKET_TYPE_IPV6_UDP 0X24
+#define IGB_PACKET_TYPE_IPV6_EXT 0X0C
+#define IGB_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IGB_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IGB_PACKET_TYPE_IPV4_IPV6 0X05
+#define IGB_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IGB_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IGB_PACKET_TYPE_MAX 0X80
+#define IGB_PACKET_TYPE_MASK 0X7F
+#define IGB_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IGB_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IGB_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4,
+ [IGB_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IGB_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IGB_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & E1000_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IGB_PACKET_TYPE_SHIFT) & IGB_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
+{
+ uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH;
+
+#if defined(RTE_LIBRTE_IEEE1588)
+ static uint32_t ip_pkt_etqf_map[8] = {
+ 0, 0, 0, PKT_RX_IEEE1588_PTP,
+ 0, 0, 0, 0,
+ };
+
+ pkt_flags |= ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07];
+#endif
+
+ return pkt_flags;
+}
+#else /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
@@ -617,6 +712,7 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
#endif
return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
}
+#endif /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
@@ -790,6 +886,10 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
+ lo_dword.hs_rss.pkt_info);
+#endif
/*
* Store the mbuf address into the next entry of the array
@@ -1024,6 +1124,10 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
+#ifdef RTE_NEXT_ABI
+ first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
+ lower.lo_dword.hs_rss.pkt_info);
+#endif
/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 04/19] ixgbe: replace bit mask based packet type with unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (2 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 03/19] e1000: replace bit mask based packet type with unified packet type Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 05/19] i40e: " Helin Zhang
` (15 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are
enabled by RTE_NEXT_ABI, which is disabled by default.
Note that a performance drop of around 2.5% (64B packets) was observed when
doing IO forwarding over 4 ports (1 port per 82599 card) on the same SNB core.
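A hedged sketch of the translation pattern this patch applies (the names
below are hypothetical; the real table is in the diff that follows): the
packet-type bits taken from the RX descriptor index a static, cache-aligned
table of RTE_PTYPE_* combinations, so the hot RX path performs one load
instead of a chain of comparisons.

#include <rte_memory.h>
#include <rte_mbuf.h>

#define HW_PTYPE_MASK 0x7F	/* hypothetical: descriptor bits used as index */

static inline uint32_t
hw_ptype_to_rte_ptype(uint16_t hw_ptype)
{
	static const uint32_t table[HW_PTYPE_MASK + 1] __rte_cache_aligned = {
		[0x01] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4,
		[0x11] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 |
			RTE_PTYPE_L4_TCP,
		/* unlisted indexes stay RTE_PTYPE_UNKNOWN (0) */
	};

	return table[hw_ptype & HW_PTYPE_MASK];
}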
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 163 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 163 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
v9 changes:
* Renamed MAC to ETHER in packet type names.
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index a211096..1455e54 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -860,6 +860,110 @@ end_of_tx:
* RX functions
*
**********************************************************************/
+#ifdef RTE_NEXT_ABI
+#define IXGBE_PACKET_TYPE_IPV4 0X01
+#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
+#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
+#define IXGBE_PACKET_TYPE_IPV4_SCTP 0X41
+#define IXGBE_PACKET_TYPE_IPV4_EXT 0X03
+#define IXGBE_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IXGBE_PACKET_TYPE_IPV6 0X04
+#define IXGBE_PACKET_TYPE_IPV6_TCP 0X14
+#define IXGBE_PACKET_TYPE_IPV6_UDP 0X24
+#define IXGBE_PACKET_TYPE_IPV6_EXT 0X0C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IXGBE_PACKET_TYPE_IPV4_IPV6 0X05
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IXGBE_PACKET_TYPE_MAX 0X80
+#define IXGBE_PACKET_TYPE_MASK 0X7F
+#define IXGBE_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+ixgbe_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IXGBE_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IXGBE_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4,
+ [IXGBE_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IXGBE_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IXGBE_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IXGBE_PACKET_TYPE_SHIFT) &
+ IXGBE_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
+{
+ static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
+ 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+ 0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
+ PKT_RX_RSS_HASH, 0, 0, 0,
+ 0, 0, 0, PKT_RX_FDIR,
+ };
+#ifdef RTE_LIBRTE_IEEE1588
+ static uint64_t ip_pkt_etqf_map[8] = {
+ 0, 0, 0, PKT_RX_IEEE1588_PTP,
+ 0, 0, 0, 0,
+ };
+
+ if (likely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return ip_pkt_etqf_map[(pkt_info >> 4) & 0X07] |
+ ip_rss_types_map[pkt_info & 0XF];
+ else
+ return ip_rss_types_map[pkt_info & 0XF];
+#else
+ return ip_rss_types_map[pkt_info & 0XF];
+#endif
+}
+#else /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
@@ -895,6 +999,7 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
#endif
return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
}
+#endif /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
@@ -950,7 +1055,13 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
+#ifdef RTE_NEXT_ABI
+ int nb_dd;
+ uint32_t s[LOOK_AHEAD];
+ uint16_t pkt_info[LOOK_AHEAD];
+#else
int s[LOOK_AHEAD], nb_dd;
+#endif /* RTE_NEXT_ABI */
int i, j, nb_rx = 0;
@@ -973,6 +1084,12 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rxdp[j].wb.upper.status_error;
+#ifdef RTE_NEXT_ABI
+ for (j = LOOK_AHEAD-1; j >= 0; --j)
+ pkt_info[j] = rxdp[j].wb.lower.lo_dword.
+ hs_rss.pkt_info;
+#endif /* RTE_NEXT_ABI */
+
/* Compute how many status bits were set */
nb_dd = 0;
for (j = 0; j < LOOK_AHEAD; ++j)
@@ -989,12 +1106,22 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
+#ifdef RTE_NEXT_ABI
+ pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
+ pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags |=
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
+ mb->ol_flags = pkt_flags;
+ mb->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);
+#else /* RTE_NEXT_ABI */
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
rxdp[j].wb.lower.lo_dword.data);
/* reuse status field from scan list */
pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
mb->ol_flags = pkt_flags;
+#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
@@ -1211,7 +1338,11 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
+#ifdef RTE_NEXT_ABI
+ uint32_t pkt_info;
+#else
uint32_t hlen_type_rss;
+#endif
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1329,6 +1460,19 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;
+#ifdef RTE_NEXT_ABI
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
+ /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
+ rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
+
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+ rxm->ol_flags = pkt_flags;
+ rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
+#else /* RTE_NEXT_ABI */
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
@@ -1337,6 +1481,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
@@ -1410,6 +1555,23 @@ ixgbe_fill_cluster_head_buf(
uint8_t port_id,
uint32_t staterr)
{
+#ifdef RTE_NEXT_ABI
+ uint16_t pkt_info;
+ uint64_t pkt_flags;
+
+ head->port = port_id;
+
+ /* The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
+ * set in the pkt_flags field.
+ */
+ head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
+ pkt_info = rte_le_to_cpu_32(desc->wb.lower.lo_dword.hs_rss.pkt_info);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags |= ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+ head->ol_flags = pkt_flags;
+ head->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
+#else /* RTE_NEXT_ABI */
uint32_t hlen_type_rss;
uint64_t pkt_flags;
@@ -1425,6 +1587,7 @@ ixgbe_fill_cluster_head_buf(
pkt_flags |= rx_desc_status_to_pkt_flags(staterr);
pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
head->ol_flags = pkt_flags;
+#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
head->hash.rss = rte_le_to_cpu_32(desc->wb.lower.hi_dword.rss);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 05/19] i40e: replace bit mask based packet type with unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (3 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 04/19] ixgbe: " Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 06/19] enic: " Helin Zhang
` (14 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/i40e/i40e_rxtx.c | 554 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 554 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
v9 changes:
* Renamed MAC to ETHER in packet type names.
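For reference, the composite packet_type values built by the table below can
be taken apart again on the application side with the RTE_PTYPE_* field masks
used later in this series. A minimal sketch, assuming only the mask names from
the unified packet type definitions in rte_mbuf.h (the helper name is
hypothetical and not part of the patch):

#include <stdint.h>
#include <stdio.h>
#include <rte_mbuf.h>

/* Hypothetical helper: print each field packed into the unified
 * packet_type of a received mbuf. */
static inline void
print_ptype_fields(const struct rte_mbuf *m)
{
        uint32_t pt = m->packet_type;

        printf("outer L2 %#x, L3 %#x, L4 %#x, tunnel %#x, "
               "inner L2 %#x, inner L3 %#x, inner L4 %#x\n",
               pt & RTE_PTYPE_L2_MASK, pt & RTE_PTYPE_L3_MASK,
               pt & RTE_PTYPE_L4_MASK, pt & RTE_PTYPE_TUNNEL_MASK,
               pt & RTE_PTYPE_INNER_L2_MASK,
               pt & RTE_PTYPE_INNER_INNER_L3_MASK,
               pt & RTE_PTYPE_INNER_L4_MASK);
}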
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index b2e1d6d..a608d1f 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -176,6 +176,540 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
return flags;
}
+#ifdef RTE_NEXT_ABI
+/* For each value it means, datasheet of hardware can tell more details */
+static inline uint32_t
+i40e_rxd_pkt_type_mapping(uint8_t ptype)
+{
+ static const uint32_t ptype_table[UINT8_MAX] __rte_cache_aligned = {
+ /* L2 types */
+ /* [0] reserved */
+ [1] = RTE_PTYPE_L2_ETHER,
+ [2] = RTE_PTYPE_L2_ETHER_TIMESYNC,
+ /* [3] - [5] reserved */
+ [6] = RTE_PTYPE_L2_ETHER_LLDP,
+ /* [7] - [10] reserved */
+ [11] = RTE_PTYPE_L2_ETHER_ARP,
+ /* [12] - [21] reserved */
+
+ /* Non tunneled IPv4 */
+ [22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [25] reserved */
+ [26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv4 */
+ [29] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [30] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [31] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [32] reserved */
+ [33] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [34] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [35] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> IPv6 */
+ [36] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [37] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [38] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [39] reserved */
+ [40] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [41] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [42] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN */
+ [43] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+ [44] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [45] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [46] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [47] reserved */
+ [48] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [49] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [50] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+ [51] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [52] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [53] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [54] reserved */
+ [55] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [56] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [57] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+ [58] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [59] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [60] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [61] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [62] reserved */
+ [63] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [64] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [65] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [66] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [67] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [68] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [69] reserved */
+ [70] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [71] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [72] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [73] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [74] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [75] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [76] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [77] reserved */
+ [78] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [79] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [80] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [81] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [82] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [83] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [84] reserved */
+ [85] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [86] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [87] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* Non tunneled IPv6 */
+ [88] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [90] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [91] reserved */
+ [92] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv4 */
+ [95] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [96] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [97] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [98] reserved */
+ [99] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [100] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [101] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> IPv6 */
+ [102] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [103] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [104] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [105] reserved */
+ [106] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [107] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [108] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN */
+ [109] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+ [110] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [111] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [112] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [113] reserved */
+ [114] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [115] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [116] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+ [117] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [118] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [119] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [120] reserved */
+ [121] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [122] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [123] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+ [124] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [125] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [126] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [127] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [128] reserved */
+ [129] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [130] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [131] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [132] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [133] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [134] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [135] reserved */
+ [136] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [137] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [138] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [139] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [140] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [141] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [142] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [143] reserved */
+ [144] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [145] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [146] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [147] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [148] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [149] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [150] reserved */
+ [151] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [152] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [153] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* All others reserved */
+ };
+
+ return ptype_table[ptype];
+}
+#else /* RTE_NEXT_ABI */
/* Translate pkt types to pkt flags */
static inline uint64_t
i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
@@ -443,6 +977,7 @@ i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
return ip_ptype_map[ptype];
}
+#endif /* RTE_NEXT_ABI */
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_FD_ID 0x01
@@ -730,11 +1265,18 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
i40e_rxd_to_vlan_tci(mb, &rxdp[j]);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+#ifdef RTE_NEXT_ABI
+ mb->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT));
+#else
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
mb->packet_type = (uint16_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT);
+#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -971,9 +1513,15 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
i40e_rxd_to_vlan_tci(rxm, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
+#else
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT);
+#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1129,10 +1677,16 @@ i40e_recv_scattered_pkts(void *rx_queue,
i40e_rxd_to_vlan_tci(first_seg, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+#ifdef RTE_NEXT_ABI
+ first_seg->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
+#else
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
first_seg->packet_type = (uint16_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT);
+#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 06/19] enic: replace bit mask based packet type with unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (4 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 05/19] i40e: " Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 07/19] vmxnet3: " Helin Zhang
` (13 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/enic/enic_main.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 15313c2..f47e96c 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -423,7 +423,11 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -432,7 +436,11 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
} else {
/* Header split */
if (sop && !eop) {
@@ -445,7 +453,11 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -457,13 +469,22 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
+#ifdef RTE_NEXT_ABI
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV4;
+#else
hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -475,7 +496,12 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
+#ifdef RTE_NEXT_ABI
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV6;
+#else
hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
}
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 07/19] vmxnet3: replace bit mask based packet type with unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (5 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 06/19] enic: " Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 08/19] fm10k: " Helin Zhang
` (12 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/vmxnet3/vmxnet3_rxtx.c | 8 ++++++++
1 file changed, 8 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
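The hunk below tells plain IPv4 from IPv4 with options by checking the IHL
field against the 20-byte base header before picking RTE_PTYPE_L3_IPV4 or
RTE_PTYPE_L3_IPV4_EXT. The same condition, isolated as a hedged sketch (the
helper name is hypothetical, not part of the patch):

#include <rte_ip.h>

/* Hypothetical helper: non-zero when the IPv4 header carries options,
 * i.e. its IHL encodes more than the 20-byte base header. */
static inline int
ipv4_has_options(const struct ipv4_hdr *ip)
{
        return ((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr);
}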
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index a1eac45..25ae2f6 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -649,9 +649,17 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
+#else
rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+#endif
else
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type = RTE_PTYPE_L3_IPV4;
+#else
rxm->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!rcd->cnc) {
if (!rcd->ipc)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 08/19] fm10k: replace bit mask based packet type with unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (6 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 07/19] vmxnet3: " Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 09/19] cxgbe: " Helin Zhang
` (11 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/fm10k/fm10k_rxtx.c | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)
v4 changes:
* Added unified packet type support for fm10k starting from v4.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
v9 changes:
* Renamed MAC to ETHER in packet type names.
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index f5d1ad0..d3bcdca 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -68,12 +68,37 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
static inline void
rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
{
+#ifdef RTE_NEXT_ABI
+ static const uint32_t
+ ptype_table[FM10K_RXD_PKTTYPE_MASK >> FM10K_RXD_PKTTYPE_SHIFT]
+ __rte_cache_aligned = {
+ [FM10K_PKTTYPE_OTHER] = RTE_PTYPE_L2_ETHER,
+ [FM10K_PKTTYPE_IPV4] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4,
+ [FM10K_PKTTYPE_IPV4_EX] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [FM10K_PKTTYPE_IPV6] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6,
+ [FM10K_PKTTYPE_IPV6_EX] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ };
+
+ m->packet_type = ptype_table[(d->w.pkt_info & FM10K_RXD_PKTTYPE_MASK)
+ >> FM10K_RXD_PKTTYPE_SHIFT];
+#else /* RTE_NEXT_ABI */
uint16_t ptype;
static const uint16_t pt_lut[] = { 0,
PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT,
PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT,
0, 0, 0
};
+#endif /* RTE_NEXT_ABI */
if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK)
m->ol_flags |= PKT_RX_RSS_HASH;
@@ -97,9 +122,11 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
if (unlikely(d->d.staterr & FM10K_RXD_STATUS_RXE))
m->ol_flags |= PKT_RX_RECIP_ERR;
+#ifndef RTE_NEXT_ABI
ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >>
FM10K_RXD_PKTTYPE_SHIFT;
m->ol_flags |= pt_lut[(uint8_t)ptype];
+#endif
}
uint16_t
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 09/19] cxgbe: replace bit mask based packet type with unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (7 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 08/19] fm10k: " Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 10/19] app/test-pipeline: " Helin Zhang
` (10 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/cxgbe/sge.c | 8 ++++++++
1 file changed, 8 insertions(+)
v9 changes:
* Added unified packet type support in the newly added cxgbe driver.
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 359296e..fdae0b4 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1326,14 +1326,22 @@ int t4_ethrx_handler(struct sge_rspq *q, const __be64 *rsp,
mbuf->port = pkt->iff;
if (pkt->l2info & htonl(F_RXF_IP)) {
+#ifdef RTE_NEXT_ABI
+ mbuf->packet_type = RTE_PTYPE_L3_IPV4;
+#else
mbuf->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (unlikely(!csum_ok))
mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
if ((pkt->l2info & htonl(F_RXF_UDP | F_RXF_TCP)) && !csum_ok)
mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
} else if (pkt->l2info & htonl(F_RXF_IP6)) {
+#ifdef RTE_NEXT_ABI
+ mbuf->packet_type = RTE_PTYPE_L3_IPV6;
+#else
mbuf->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
}
mbuf->port = pkt->iff;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 10/19] app/test-pipeline: replace bit mask based packet type with unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (8 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 09/19] cxgbe: " Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 11/19] app/testpmd: " Helin Zhang
` (9 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pipeline/pipeline_hash.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
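The metadata path below now keys on the outer L3 field of the unified
packet_type via RTE_ETH_IS_IPV4_HDR()/RTE_ETH_IS_IPV6_HDR(). Roughly, such a
test only has to look at the L3 field and accept any of the IPv4 (or IPv6)
variants; a hedged sketch, not the actual macro definition:

#include <stdint.h>
#include <rte_mbuf.h>

/* Hypothetical "is outer IPv4" test on the unified packet type. */
static inline int
ptype_outer_is_ipv4(uint32_t ptype)
{
        uint32_t l3 = ptype & RTE_PTYPE_L3_MASK;

        return l3 == RTE_PTYPE_L3_IPV4 ||
               l3 == RTE_PTYPE_L3_IPV4_EXT ||
               l3 == RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
}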
diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 4598ad4..aa3f9e5 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,20 +459,33 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;
k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+#else
} else {
+#endif
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;
memcpy(key, ipv6_dst, 16);
+#ifdef RTE_NEXT_ABI
+ } else
+ continue;
+#else
}
+#endif
*signature = test_hash(key, 0, 0);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 11/19] app/testpmd: replace bit mask based packet type with unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (9 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 10/19] app/test-pipeline: " Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 12/19] app/test: Remove useless code Helin Zhang
` (8 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
app/test-pmd/csumonly.c | 14 ++++
app/test-pmd/rxonly.c | 183 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 197 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v4 changes:
* Added printing of the packet type of each received packet in rxonly mode.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
v9 changes:
* Renamed MAC to ETHER in packet type names.
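In csumonly.c the VXLAN parser now asks RTE_ETH_IS_TUNNEL_PKT(m->packet_type)
instead of testing the old PKT_RX_TUNNEL_* flags. At its simplest, a tunnel
test on the unified type is a non-zero tunnel field; a hedged sketch
(hypothetical helper; the real macro may also take the inner-layer fields
into account):

#include <stdint.h>
#include <rte_mbuf.h>

/* Hypothetical tunnel test: any non-zero tunnel field (IP, GRE, VXLAN,
 * NVGRE, GENEVE, GRENAT, ...) marks the packet as tunneled. */
static inline int
ptype_is_tunneled(uint32_t ptype)
{
        return (ptype & RTE_PTYPE_TUNNEL_MASK) != 0;
}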
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 4287940..1bf3485 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -202,8 +202,14 @@ parse_ethernet(struct ether_hdr *eth_hdr, struct testpmd_offload_info *info)
/* Parse a vxlan header */
static void
+#ifdef RTE_NEXT_ABI
+parse_vxlan(struct udp_hdr *udp_hdr,
+ struct testpmd_offload_info *info,
+ uint32_t pkt_type)
+#else
parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
uint64_t mbuf_olflags)
+#endif
{
struct ether_hdr *eth_hdr;
@@ -211,8 +217,12 @@ parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
* (rfc7348) or that the rx offload flag is set (i40e only
* currently) */
if (udp_hdr->dst_port != _htons(4789) &&
+#ifdef RTE_NEXT_ABI
+ RTE_ETH_IS_TUNNEL_PKT(pkt_type) == 0)
+#else
(mbuf_olflags & (PKT_RX_TUNNEL_IPV4_HDR |
PKT_RX_TUNNEL_IPV6_HDR)) == 0)
+#endif
return;
info->is_tunnel = 1;
@@ -549,7 +559,11 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
struct udp_hdr *udp_hdr;
udp_hdr = (struct udp_hdr *)((char *)l3_hdr +
info.l3_len);
+#ifdef RTE_NEXT_ABI
+ parse_vxlan(udp_hdr, &info, m->packet_type);
+#else
parse_vxlan(udp_hdr, &info, m->ol_flags);
+#endif
} else if (info.l4_proto == IPPROTO_GRE) {
struct simple_gre_hdr *gre_hdr;
gre_hdr = (struct simple_gre_hdr *)
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index 4a9f86e..632056d 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -91,7 +91,11 @@ pkt_burst_receive(struct fwd_stream *fs)
uint64_t ol_flags;
uint16_t nb_rx;
uint16_t i, packet_type;
+#ifdef RTE_NEXT_ABI
+ uint16_t is_encapsulation;
+#else
uint64_t is_encapsulation;
+#endif
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
@@ -135,8 +139,12 @@ pkt_burst_receive(struct fwd_stream *fs)
ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
+#ifdef RTE_NEXT_ABI
+ is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
+#else
is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
PKT_RX_TUNNEL_IPV6_HDR);
+#endif
print_ether_addr(" src=", ð_hdr->s_addr);
print_ether_addr(" - dst=", ð_hdr->d_addr);
@@ -163,6 +171,177 @@ pkt_burst_receive(struct fwd_stream *fs)
if (ol_flags & PKT_RX_QINQ_PKT)
printf(" - QinQ VLAN tci=0x%x, VLAN tci outer=0x%x",
mb->vlan_tci, mb->vlan_tci_outer);
+#ifdef RTE_NEXT_ABI
+ if (mb->packet_type) {
+ uint32_t ptype;
+
+ /* (outer) L2 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L2_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L2_ETHER:
+ printf(" - (outer) L2 type: ETHER");
+ break;
+ case RTE_PTYPE_L2_ETHER_TIMESYNC:
+ printf(" - (outer) L2 type: ETHER_Timesync");
+ break;
+ case RTE_PTYPE_L2_ETHER_ARP:
+ printf(" - (outer) L2 type: ETHER_ARP");
+ break;
+ case RTE_PTYPE_L2_ETHER_LLDP:
+ printf(" - (outer) L2 type: ETHER_LLDP");
+ break;
+ default:
+ printf(" - (outer) L2 type: Unknown");
+ break;
+ }
+
+ /* (outer) L3 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L3_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L3_IPV4:
+ printf(" - (outer) L3 type: IPV4");
+ break;
+ case RTE_PTYPE_L3_IPV4_EXT:
+ printf(" - (outer) L3 type: IPV4_EXT");
+ break;
+ case RTE_PTYPE_L3_IPV6:
+ printf(" - (outer) L3 type: IPV6");
+ break;
+ case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
+ printf(" - (outer) L3 type: IPV4_EXT_UNKNOWN");
+ break;
+ case RTE_PTYPE_L3_IPV6_EXT:
+ printf(" - (outer) L3 type: IPV6_EXT");
+ break;
+ case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN:
+ printf(" - (outer) L3 type: IPV6_EXT_UNKNOWN");
+ break;
+ default:
+ printf(" - (outer) L3 type: Unknown");
+ break;
+ }
+
+ /* (outer) L4 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L4_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L4_TCP:
+ printf(" - (outer) L4 type: TCP");
+ break;
+ case RTE_PTYPE_L4_UDP:
+ printf(" - (outer) L4 type: UDP");
+ break;
+ case RTE_PTYPE_L4_FRAG:
+ printf(" - (outer) L4 type: L4_FRAG");
+ break;
+ case RTE_PTYPE_L4_SCTP:
+ printf(" - (outer) L4 type: SCTP");
+ break;
+ case RTE_PTYPE_L4_ICMP:
+ printf(" - (outer) L4 type: ICMP");
+ break;
+ case RTE_PTYPE_L4_NONFRAG:
+ printf(" - (outer) L4 type: L4_NONFRAG");
+ break;
+ default:
+ printf(" - (outer) L4 type: Unknown");
+ break;
+ }
+
+ /* packet tunnel type */
+ ptype = mb->packet_type & RTE_PTYPE_TUNNEL_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_TUNNEL_IP:
+ printf(" - Tunnel type: IP");
+ break;
+ case RTE_PTYPE_TUNNEL_GRE:
+ printf(" - Tunnel type: GRE");
+ break;
+ case RTE_PTYPE_TUNNEL_VXLAN:
+ printf(" - Tunnel type: VXLAN");
+ break;
+ case RTE_PTYPE_TUNNEL_NVGRE:
+ printf(" - Tunnel type: NVGRE");
+ break;
+ case RTE_PTYPE_TUNNEL_GENEVE:
+ printf(" - Tunnel type: GENEVE");
+ break;
+ case RTE_PTYPE_TUNNEL_GRENAT:
+ printf(" - Tunnel type: GRENAT");
+ break;
+ default:
+ printf(" - Tunnel type: Unknown");
+ break;
+ }
+
+ /* inner L2 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_L2_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L2_ETHER:
+ printf(" - Inner L2 type: ETHER");
+ break;
+ case RTE_PTYPE_INNER_L2_ETHER_VLAN:
+ printf(" - Inner L2 type: ETHER_VLAN");
+ break;
+ default:
+ printf(" - Inner L2 type: Unknown");
+ break;
+ }
+
+ /* inner L3 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_INNER_L3_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L3_IPV4:
+ printf(" - Inner L3 type: IPV4");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV4_EXT:
+ printf(" - Inner L3 type: IPV4_EXT");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6:
+ printf(" - Inner L3 type: IPV6");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN:
+ printf(" - Inner L3 type: IPV4_EXT_UNKNOWN");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6_EXT:
+ printf(" - Inner L3 type: IPV6_EXT");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN:
+ printf(" - Inner L3 type: IPV6_EXT_UNKOWN");
+ break;
+ default:
+ printf(" - Inner L3 type: Unknown");
+ break;
+ }
+
+ /* inner L4 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_L4_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L4_TCP:
+ printf(" - Inner L4 type: TCP");
+ break;
+ case RTE_PTYPE_INNER_L4_UDP:
+ printf(" - Inner L4 type: UDP");
+ break;
+ case RTE_PTYPE_INNER_L4_FRAG:
+ printf(" - Inner L4 type: L4_FRAG");
+ break;
+ case RTE_PTYPE_INNER_L4_SCTP:
+ printf(" - Inner L4 type: SCTP");
+ break;
+ case RTE_PTYPE_INNER_L4_ICMP:
+ printf(" - Inner L4 type: ICMP");
+ break;
+ case RTE_PTYPE_INNER_L4_NONFRAG:
+ printf(" - Inner L4 type: L4_NONFRAG");
+ break;
+ default:
+ printf(" - Inner L4 type: Unknown");
+ break;
+ }
+ printf("\n");
+ } else
+ printf("Unknown packet type\n");
+#endif /* RTE_NEXT_ABI */
if (is_encapsulation) {
struct ipv4_hdr *ipv4_hdr;
struct ipv6_hdr *ipv6_hdr;
@@ -176,7 +355,11 @@ pkt_burst_receive(struct fwd_stream *fs)
l2_len = sizeof(struct ether_hdr);
/* Do not support ipv4 option field */
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
+#else
if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
+#endif
l3_len = sizeof(struct ipv4_hdr);
ipv4_hdr = rte_pktmbuf_mtod_offset(mb,
struct ipv4_hdr *,
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 12/19] app/test: Remove useless code
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (10 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 11/19] app/testpmd: " Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 13/19] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
` (7 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
Several useless lines of code were added accidentally, and they block
packet type unification. They should be removed entirely.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test/packet_burst_generator.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
v4 changes:
* Removed several useless code lines which block packet type unification.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
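This patch only removes the ol_flags bits the generator used to set. Under
the unified scheme, a generator that still wants receivers to see a
classified packet would tag packet_type instead; a minimal sketch
(hypothetical helper, not part of this patch):

#include <rte_mbuf.h>

/* Hypothetical helper: tag a generated mbuf with a unified packet type
 * instead of the removed PKT_RX_IPV4_HDR/PKT_RX_IPV6_HDR flags. */
static inline void
set_generated_ptype(struct rte_mbuf *pkt, int is_ipv4)
{
        pkt->packet_type = RTE_PTYPE_L2_ETHER |
                (is_ipv4 ? RTE_PTYPE_L3_IPV4 : RTE_PTYPE_L3_IPV6);
}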
diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index 28d9e25..d9d808b 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -273,19 +273,21 @@ nomore_mbuf:
if (ipv4) {
pkt->vlan_tci = ETHER_TYPE_IPv4;
pkt->l3_len = sizeof(struct ipv4_hdr);
-
+#ifndef RTE_NEXT_ABI
if (vlan_enabled)
pkt->ol_flags = PKT_RX_IPV4_HDR | PKT_RX_VLAN_PKT;
else
pkt->ol_flags = PKT_RX_IPV4_HDR;
+#endif
} else {
pkt->vlan_tci = ETHER_TYPE_IPv6;
pkt->l3_len = sizeof(struct ipv6_hdr);
-
+#ifndef RTE_NEXT_ABI
if (vlan_enabled)
pkt->ol_flags = PKT_RX_IPV6_HDR | PKT_RX_VLAN_PKT;
else
pkt->ol_flags = PKT_RX_IPV6_HDR;
+#endif
}
pkts_burst[nb_pkt] = pkt;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 13/19] examples/ip_fragmentation: replace bit mask based packet type with unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (11 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 12/19] app/test: Remove useless code Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 14/19] examples/ip_reassembly: " Helin Zhang
` (6 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_fragmentation/main.c | 9 +++++++++
1 file changed, 9 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 0922ba6..b71d05f 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -283,7 +283,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
len = qconf->tx_mbufs[port_out].len;
/* if this is an IPv4 packet */
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
/* Read the lookup key (i.e. ip_dst) from the input packet */
@@ -317,9 +321,14 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
if (unlikely (len2 < 0))
return;
}
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if this is an IPv6 packet */
+#else
}
/* if this is an IPv6 packet */
else if (m->ol_flags & PKT_RX_IPV6_HDR) {
+#endif
struct ipv6_hdr *ip_hdr;
ipv6 = 1;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 14/19] examples/ip_reassembly: replace bit mask based packet type with unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (12 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 13/19] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 15/19] examples/l3fwd-acl: " Helin Zhang
` (5 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_reassembly/main.c | 9 +++++++++
1 file changed, 9 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 9ecb6f9..f1c47ad 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -356,7 +356,11 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dst_port = portid;
/* if packet is IPv4 */
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & (PKT_RX_IPV4_HDR)) {
+#endif
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
@@ -396,9 +400,14 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
}
eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if packet is IPv6 */
+#else
}
/* if packet is IPv6 */
else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
+#endif
struct ipv6_extension_fragment *frag_hdr;
struct ipv6_hdr *ip_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 15/19] examples/l3fwd-acl: replace bit mask based packet type with unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (13 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 14/19] examples/ip_reassembly: " Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 16/19] examples/l3fwd-power: " Helin Zhang
` (4 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-acl/main.c | 29 +++++++++++++++++++++++------
1 file changed, 23 insertions(+), 6 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 29cb25e..b2bdf2f 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -645,10 +645,13 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
+#else
int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
if (type == PKT_RX_IPV4_HDR) {
-
+#endif
ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct ipv4_hdr *,
sizeof(struct ether_hdr));
@@ -667,9 +670,11 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
/* Not a valid IPv4 packet */
rte_pktmbuf_free(pkt);
}
-
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
+#else
} else if (type == PKT_RX_IPV6_HDR) {
-
+#endif
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -687,17 +692,22 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
+#else
int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
if (type == PKT_RX_IPV4_HDR) {
-
+#endif
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;
-
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
+#else
} else if (type == PKT_RX_IPV6_HDR) {
-
+#endif
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -745,10 +755,17 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
+ dump_acl4_rule(m, res);
+ else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
+ dump_acl6_rule(m, res);
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR)
dump_acl4_rule(m, res);
else
dump_acl6_rule(m, res);
+#endif /* RTE_NEXT_ABI */
}
#endif
rte_pktmbuf_free(m);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 16/19] examples/l3fwd-power: replace bit mask based packet type with unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (14 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 15/19] examples/l3fwd-acl: " Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 17/19] examples/l3fwd: " Helin Zhang
` (3 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the bit masks of packet type in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-power/main.c | 8 ++++++++
1 file changed, 8 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index d4eba1a..dbbebdd 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -635,7 +635,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
/* Handle IPv4 headers.*/
ipv4_hdr =
rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *,
@@ -670,8 +674,12 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
ether_addr_copy(&ports_eth_addr[dst_port], ð_hdr->s_addr);
send_single_packet(m, dst_port);
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+#else
}
else {
+#endif
/* Handle IPv6 headers.*/
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
struct ipv6_hdr *ipv6_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 17/19] examples/l3fwd: replace bit mask based packet type with unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (15 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 16/19] examples/l3fwd-power: " Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 18/19] examples/tep_termination: " Helin Zhang
` (2 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled
only when RTE_NEXT_ABI is defined, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd/main.c | 123 ++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 120 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v3 changes:
* Minor bug fixes and enhancements.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
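A note on the x4 path below (a sketch only, assuming <rte_mbuf.h> with
RTE_NEXT_ABI defined): because RTE_PTYPE_L3_IPV4 is a single bit, ANDing the
packet_type of four packets and testing that bit is enough to know whether
all four carry an IPv4 outer header, which is what the burst loop relies on.

	/* Non-zero only if all four mbufs are recognized as (outer) IPv4. */
	static inline uint32_t
	all_four_ipv4(struct rte_mbuf *pkt[4])
	{
		return pkt[0]->packet_type & pkt[1]->packet_type &
		       pkt[2]->packet_type & pkt[3]->packet_type &
		       RTE_PTYPE_L3_IPV4;
	}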
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 5c22ed1..b1bcb35 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -939,7 +939,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
/* Handle IPv4 headers.*/
ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *,
sizeof(struct ether_hdr));
@@ -970,8 +974,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], ð_hdr->s_addr);
send_single_packet(m, dst_port);
-
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+#else
} else {
+#endif
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;
@@ -990,8 +997,13 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], ð_hdr->s_addr);
send_single_packet(m, dst_port);
+#ifdef RTE_NEXT_ABI
+ } else
+ /* Free the mbuf that contains non-IPV4/IPV6 packet */
+ rte_pktmbuf_free(m);
+#else
}
-
+#endif
}
#ifdef DO_RFC_1812_CHECKS
@@ -1015,12 +1027,19 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
+#ifdef RTE_NEXT_ABI
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
+#else
rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
+#endif
{
uint8_t ihl;
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(ptype)) {
+#else
if ((flags & PKT_RX_IPV4_HDR) != 0) {
-
+#endif
ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
ipv4_hdr->time_to_live--;
@@ -1050,11 +1069,19 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
+#else
if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
+#else
} else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
+#endif
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1088,12 +1115,52 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];
dst_port[0] = dp;
+#ifdef RTE_NEXT_ABI
+ rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
+#else
rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
+#endif
te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
}
+#ifdef RTE_NEXT_ABI
+/*
+ * Read packet_type and destination IPV4 addresses from 4 mbufs.
+ */
+static inline void
+processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
+ __m128i *dip,
+ uint32_t *ipv4_flag)
+{
+ struct ipv4_hdr *ipv4_hdr;
+ struct ether_hdr *eth_hdr;
+ uint32_t x0, x1, x2, x3;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x0 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] = pkt[0]->packet_type & RTE_PTYPE_L3_IPV4;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x1 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] &= pkt[1]->packet_type;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x2 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] &= pkt[2]->packet_type;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x3 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] &= pkt[3]->packet_type;
+
+ dip[0] = _mm_set_epi32(x3, x2, x1, x0);
+}
+#else /* RTE_NEXT_ABI */
/*
* Read ol_flags and destination IPV4 addresses from 4 mbufs.
*/
@@ -1126,14 +1193,24 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
+#endif /* RTE_NEXT_ABI */
/*
* Lookup into LPM for destination port.
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
+#ifdef RTE_NEXT_ABI
+processx4_step2(const struct lcore_conf *qconf,
+ __m128i dip,
+ uint32_t ipv4_flag,
+ uint8_t portid,
+ struct rte_mbuf *pkt[FWDSTEP],
+ uint16_t dprt[FWDSTEP])
+#else
processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
+#endif /* RTE_NEXT_ABI */
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1143,7 +1220,11 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);
/* if all 4 packets are IPV4. */
+#ifdef RTE_NEXT_ABI
+ if (likely(ipv4_flag)) {
+#else
if (likely(flag != 0)) {
+#endif
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1193,6 +1274,16 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[2], te[2]);
_mm_store_si128(p[3], te[3]);
+#ifdef RTE_NEXT_ABI
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
+ &dst_port[0], pkt[0]->packet_type);
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
+ &dst_port[1], pkt[1]->packet_type);
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
+ &dst_port[2], pkt[2]->packet_type);
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
+ &dst_port[3], pkt[3]->packet_type);
+#else /* RTE_NEXT_ABI */
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
&dst_port[0], pkt[0]->ol_flags);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
@@ -1201,6 +1292,7 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
&dst_port[2], pkt[2]->ol_flags);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
&dst_port[3], pkt[3]->ol_flags);
+#endif /* RTE_NEXT_ABI */
}
/*
@@ -1387,7 +1479,11 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
+#ifdef RTE_NEXT_ABI
+ uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
+#else
uint32_t flag[MAX_PKT_BURST / FWDSTEP];
+#endif
uint16_t pnum[MAX_PKT_BURST + 1];
#endif
@@ -1457,6 +1553,18 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
for (j = 0; j < n ; j+=4) {
+#ifdef RTE_NEXT_ABI
+ uint32_t pkt_type =
+ pkts_burst[j]->packet_type &
+ pkts_burst[j+1]->packet_type &
+ pkts_burst[j+2]->packet_type &
+ pkts_burst[j+3]->packet_type;
+ if (pkt_type & RTE_PTYPE_L3_IPV4) {
+ simple_ipv4_fwd_4pkts(
+ &pkts_burst[j], portid, qconf);
+ } else if (pkt_type &
+ RTE_PTYPE_L3_IPV6) {
+#else /* RTE_NEXT_ABI */
uint32_t ol_flag = pkts_burst[j]->ol_flags
& pkts_burst[j+1]->ol_flags
& pkts_burst[j+2]->ol_flags
@@ -1465,6 +1573,7 @@ main_loop(__attribute__((unused)) void *dummy)
simple_ipv4_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else if (ol_flag & PKT_RX_IPV6_HDR) {
+#endif /* RTE_NEXT_ABI */
simple_ipv6_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1489,13 +1598,21 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
+#ifdef RTE_NEXT_ABI
+ &ipv4_flag[j / FWDSTEP]);
+#else
&flag[j / FWDSTEP]);
+#endif
}
k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
+#ifdef RTE_NEXT_ABI
+ ipv4_flag[j / FWDSTEP], portid,
+#else
flag[j / FWDSTEP], portid,
+#endif
&pkts_burst[j], &dst_port[j]);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 18/19] examples/tep_termination: replace bit mask based packet type with unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (16 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 17/19] examples/l3fwd: " Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 19/19] mbuf: remove old packet type bit masks Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled
only when RTE_NEXT_ABI is defined, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/tep_termination/vxlan.c | 4 ++++
1 file changed, 4 insertions(+)
v9 changes:
* Used the unified packet type to check if it is a VXLAN packet; the change is
included under RTE_NEXT_ABI, which is disabled by default.
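For reference, a sketch (not part of this patch) of how the check could be
narrowed from 'any tunnel' to VXLAN specifically, using the masks defined
earlier in this series; it assumes <rte_mbuf.h> with RTE_NEXT_ABI defined:

	/* Non-zero if the PMD recognized the packet as VXLAN-tunneled. */
	static inline int
	pkt_is_vxlan(const struct rte_mbuf *pkt)
	{
		return (pkt->packet_type & RTE_PTYPE_TUNNEL_MASK) ==
			RTE_PTYPE_TUNNEL_VXLAN;
	}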
diff --git a/examples/tep_termination/vxlan.c b/examples/tep_termination/vxlan.c
index b2a2f53..ae4bc9e 100644
--- a/examples/tep_termination/vxlan.c
+++ b/examples/tep_termination/vxlan.c
@@ -180,8 +180,12 @@ decapsulation(struct rte_mbuf *pkt)
* (rfc7348) or that the rx offload flag is set (i40e only
* currently)*/
if (udp_hdr->dst_port != rte_cpu_to_be_16(DEFAULT_VXLAN_PORT) &&
+#ifdef RTE_NEXT_ABI
+ ((pkt->packet_type & RTE_PTYPE_TUNNEL_MASK) == 0)
+#else
(pkt->ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
PKT_RX_TUNNEL_IPV6_HDR)) == 0)
+#endif
return -1;
outer_header_len = info.outer_l2_len + info.outer_l3_len
+ sizeof(struct udp_hdr) + sizeof(struct vxlan_hdr);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v9 19/19] mbuf: remove old packet type bit masks
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (17 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 18/19] examples/tep_termination: " Helin Zhang
@ 2015-07-03 8:32 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-03 8:32 UTC (permalink / raw)
To: dev
As the unified packet types are now used instead, the old bit masks and
the relevant macros for packet type indication need to be removed.
To avoid breaking ABI compatibility, all the changes are enabled
only when RTE_NEXT_ABI is defined, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 4 ++++
lib/librte_mbuf/rte_mbuf.h | 4 ++++
2 files changed, 8 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
* Redefined the bit masks for packet RX offload flags.
v5 changes:
* Rolled back the bit masks of RX flags, for ABI compatibility.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index f506517..4320dd4 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -251,14 +251,18 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
+#ifndef RTE_NEXT_ABI
case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
+#endif /* RTE_NEXT_ABI */
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
+#ifndef RTE_NEXT_ABI
case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
+#endif /* RTE_NEXT_ABI */
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 3a17d95..b90c73f 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -92,14 +92,18 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
+#ifndef RTE_NEXT_ABI
#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
+#endif /* RTE_NEXT_ABI */
#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
+#ifndef RTE_NEXT_ABI
#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
+#endif /* RTE_NEXT_ABI */
#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double VLAN stripped. */
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 00/19] unified packet type
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
` (18 preceding siblings ...)
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 19/19] mbuf: remove old packet type bit masks Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 01/19] mbuf: redefine packet_type in rte_mbuf Helin Zhang
` (19 more replies)
19 siblings, 20 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
Currently only 6 bits stored in ol_flags are used to indicate the packet
types. This is not enough, as some NIC hardware can recognize quite a lot
of packet types, e.g. i40e hardware can recognize more than 150 packet
types. Hiding those packet types hides hardware offload capabilities which
could be quite useful for improving performance and for end users.
So a unified packet type is needed to support all possible PMDs. The 16-bit
packet_type field in the mbuf structure can be enlarged to 32 bits and used
for this purpose. In addition, the packet type bits stored in the ol_flags
field should be removed entirely, saving 6 bits of ol_flags as a benefit.
The 32 bits of packet_type are divided into several sub-fields to carry
different packet type information of a packet. The initial design divides
those bits into fields for L2 types, L3 types, L4 types, tunnel types,
inner L2 types, inner L3 types and inner L4 types. All PMDs should
translate the offloaded packet types into these 7 fields of information for
user applications.
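As a rough usage sketch only (not part of this patch set; it assumes
<rte_mbuf.h> with RTE_NEXT_ABI defined), an application could classify
received packets with the RTE_ETH_IS_* helpers introduced by this series:

	/* Sketch: count packets per recognized class of the unified packet type. */
	static void
	classify_pkt(const struct rte_mbuf *m,
		     uint64_t *n_ipv4, uint64_t *n_ipv6, uint64_t *n_tunnel)
	{
		if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
			(*n_ipv4)++;
		else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
			(*n_ipv6)++;

		if (RTE_ETH_IS_TUNNEL_PKT(m->packet_type))
			(*n_tunnel)++;
	}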
To avoid breaking ABI compatibility, all the code changes for the unified
packet type are currently disabled at compile time by default. Users can
enable them manually by defining the RTE_NEXT_ABI macro. The code changes
will be enabled by default in a future release, and the old code will be
removed accordingly, once the ABI change process is complete.
Note that this patch set should be integrated after the patch set
'[PATCH v3 0/7] support i40e QinQ stripping and insertion', to cleanly
resolve the conflict during integration, as both patch sets modify 'struct
rte_mbuf' and the final layout of 'struct rte_mbuf' is key to the vectorized
ixgbe PMD.
Its v8 version was acked by Konstantin Ananyev <konstantin.ananyev@intel.com>
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
* Used redefined packet types and enlarged packet_type field for all PMDs
and corresponding applications.
* Removed changes in bond and its related application, as they are no longer
needed after the recent bond changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Put vector ixgbe changes right after mbuf changes.
* Disabled vector ixgbe PMD by default, as mbuf layout changed, and then
re-enabled it after vector ixgbe PMD updated.
* Put the definitions of unified packet type into a single patch.
* Minor bug fixes and enhancements in l3fwd example.
v4 changes:
* Added detailed descriptions of each packet type.
* Supported unified packet type of fm10k.
* Added printing logs of packet types of each received packet for rxonly
mode in testpmd.
* Removed several useless code lines which block packet type unification from
app/test/packet_burst_generator.c.
v5 changes:
* Added more detailed descriptions for each packet type, together with examples.
* Rolled back the macro definitions of RX packet flags, for ABI compatibility.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
* Integrated with patch set for '[PATCH v3 0/7] support i40e QinQ stripping
and insertion', to clearly solve the conflicts during merging.
v8 changes:
* Moved the field 'vlan_tci_outer' in 'struct rte_mbuf' to the end of the 1st
cache line, to avoid breaking any vectorized PMD stores, as the fields
'packet_type, pkt_len, data_len, vlan_tci, rss' should stay in a contiguous 128
bits.
v9 changes:
* Put the mbuf changes and vector PMD changes together, as they are
tightly relevant.
* Renamed MAC to ETHER in packet type names.
* Corrected the packet type explanation of RTE_PTYPE_L2_ETHER.
* Reworked newly added cxgbe driver and tep_termination example application to
support unified packet type, which is disabled by default.
v10 changes:
* Fixed a compile error in tep_termination, when RTE_NEXT_ABI is enabled.
Helin Zhang (19):
mbuf: redefine packet_type in rte_mbuf
mbuf: add definitions of unified packet types
e1000: replace bit mask based packet type with unified packet type
ixgbe: replace bit mask based packet type with unified packet type
i40e: replace bit mask based packet type with unified packet type
enic: replace bit mask based packet type with unified packet type
vmxnet3: replace bit mask based packet type with unified packet type
fm10k: replace bit mask based packet type with unified packet type
cxgbe: replace bit mask based packet type with unified packet type
app/test-pipeline: replace bit mask based packet type with unified
packet type
app/testpmd: replace bit mask based packet type with unified packet
type
app/test: Remove useless code
examples/ip_fragmentation: replace bit mask based packet type with
unified packet type
examples/ip_reassembly: replace bit mask based packet type with
unified packet type
examples/l3fwd-acl: replace bit mask based packet type with unified
packet type
examples/l3fwd-power: replace bit mask based packet type with unified
packet type
examples/l3fwd: replace bit mask based packet type with unified packet
type
examples/tep_termination: replace bit mask based packet type with
unified packet type
mbuf: remove old packet type bit masks
app/test-pipeline/pipeline_hash.c | 13 +
app/test-pmd/csumonly.c | 14 +
app/test-pmd/rxonly.c | 183 +++++++
app/test/packet_burst_generator.c | 6 +-
drivers/net/cxgbe/sge.c | 8 +
drivers/net/e1000/igb_rxtx.c | 104 ++++
drivers/net/enic/enic_main.c | 26 +
drivers/net/fm10k/fm10k_rxtx.c | 27 +
drivers/net/i40e/i40e_rxtx.c | 554 +++++++++++++++++++++
drivers/net/ixgbe/ixgbe_rxtx.c | 163 ++++++
drivers/net/ixgbe/ixgbe_rxtx_vec.c | 75 ++-
drivers/net/vmxnet3/vmxnet3_rxtx.c | 8 +
examples/ip_fragmentation/main.c | 9 +
examples/ip_reassembly/main.c | 9 +
examples/l3fwd-acl/main.c | 29 +-
examples/l3fwd-power/main.c | 8 +
examples/l3fwd/main.c | 123 ++++-
examples/tep_termination/vxlan.c | 4 +
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 +
lib/librte_mbuf/rte_mbuf.c | 4 +
lib/librte_mbuf/rte_mbuf.h | 516 +++++++++++++++++++
21 files changed, 1876 insertions(+), 13 deletions(-)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 01/19] mbuf: redefine packet_type in rte_mbuf
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-13 15:53 ` Thomas Monjalon
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 02/19] mbuf: add definitions of unified packet types Helin Zhang
` (18 subsequent siblings)
19 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
In order to unify the packet type, the 'packet_type' field in 'struct rte_mbuf'
needs to be extended from 16 to 32 bits. Accordingly, some fields in 'struct
rte_mbuf' are re-organized to support this change for the vector PMD. As
'struct rte_kni_mbuf' for KNI must map exactly to 'struct rte_mbuf', it is
modified accordingly.
In the ixgbe PMD, corresponding changes are added for the mbuf changes; in
particular, the packet type bit masks in 'ol_flags' are replaced by the unified
packet type. In addition, more packet types (UDP, TCP and SCTP) are supported
in the vectorized ixgbe PMD.
To avoid breaking ABI compatibility, all the changes are enabled only when
RTE_NEXT_ABI is defined, which is disabled by default.
Note that a performance drop of around 2% (64B packets) was observed when doing
4-port (1 port per 82599 card) IO forwarding on the same SNB core.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx_vec.c | 75 +++++++++++++++++++++-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 ++
lib/librte_mbuf/rte_mbuf.h | 26 ++++++++
3 files changed, 105 insertions(+), 2 deletions(-)
v2 changes:
* Enlarged the packet_type field from 16 bits to 32 bits.
* Redefined the packet type sub-fields.
* Updated the 'struct rte_kni_mbuf' for KNI according to the mbuf changes.
v3 changes:
* Put the mbuf layout changes into a single patch.
* Disabled vector ixgbe PMD by default, as mbuf layout changed.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
* Integrated with changes of QinQ stripping/insertion.
v8 changes:
* Moved the field of 'vlan_tci_outer' in 'struct rte_mbuf' to the end
of the 1st cache line, to avoid breaking any vectorized PMD storing.
v9 changes:
* Put the mbuf changes and vector PMD changes together, as they are
tightly relevant.
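Before the diff, a small sketch (illustration only, assuming <stdio.h> and
<rte_mbuf.h> with RTE_NEXT_ABI defined) of what the anonymous union added
below allows: the same 32 bits can be read as a whole or as individual 4-bit
sub-fields.

	/* Both reads alias the same 32-bit 'packet_type' declared below. */
	static inline void
	show_ptype(const struct rte_mbuf *m)
	{
		uint32_t full = m->packet_type; /* combined L2/L3/L4/tunnel info */
		uint32_t l3 = m->l3_type;       /* (outer) L3 sub-field only */

		printf("packet_type=0x%08x outer L3 index=%u\n",
		       (unsigned)full, (unsigned)l3);
	}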
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
index 912d3b4..d3ac74a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
@@ -134,6 +134,12 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE
+#ifdef RTE_NEXT_ABI
+#define OLFLAGS_MASK_V (((uint64_t)PKT_RX_VLAN_PKT << 48) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 32) | \
+ ((uint64_t)PKT_RX_VLAN_PKT << 16) | \
+ ((uint64_t)PKT_RX_VLAN_PKT))
+#else
#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
PKT_RX_IPV6_HDR_EXT))
@@ -142,11 +148,26 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
((uint64_t)OLFLAGS_MASK << 16) | \
((uint64_t)OLFLAGS_MASK))
#define PTYPE_SHIFT (1)
+#endif /* RTE_NEXT_ABI */
+
#define VTAG_SHIFT (3)
static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
+#ifdef RTE_NEXT_ABI
+ __m128i vtag0, vtag1;
+ union {
+ uint16_t e[4];
+ uint64_t dword;
+ } vol;
+
+ vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
+ vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);
+ vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
+ vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);
+ vol.dword = _mm_cvtsi128_si64(vtag1) & OLFLAGS_MASK_V;
+#else
__m128i ptype0, ptype1, vtag0, vtag1;
union {
uint16_t e[4];
@@ -166,6 +187,7 @@ desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
ptype1 = _mm_or_si128(ptype1, vtag1);
vol.dword = _mm_cvtsi128_si64(ptype1) & OLFLAGS_MASK_V;
+#endif /* RTE_NEXT_ABI */
rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
@@ -196,6 +218,18 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
int pos;
uint64_t var;
__m128i shuf_msk;
+#ifdef RTE_NEXT_ABI
+ __m128i crc_adjust = _mm_set_epi16(
+ 0, 0, 0, /* ignore non-length fields */
+ -rxq->crc_len, /* sub crc on data_len */
+ 0, /* ignore high-16bits of pkt_len */
+ -rxq->crc_len, /* sub crc on pkt_len */
+ 0, 0 /* ignore pkt_type field */
+ );
+ __m128i dd_check, eop_check;
+ __m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
+ 0xFFFFFFFF, 0xFFFF07F0);
+#else
__m128i crc_adjust = _mm_set_epi16(
0, 0, 0, 0, /* ignore non-length fields */
0, /* ignore high-16bits of pkt_len */
@@ -204,6 +238,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
0 /* ignore pkt_type field */
);
__m128i dd_check, eop_check;
+#endif /* RTE_NEXT_ABI */
if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -232,6 +267,18 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL);
/* mask to shuffle from desc. to mbuf */
+#ifdef RTE_NEXT_ABI
+ shuf_msk = _mm_set_epi8(
+ 7, 6, 5, 4, /* octet 4~7, 32bits rss */
+ 15, 14, /* octet 14~15, low 16 bits vlan_macip */
+ 13, 12, /* octet 12~13, 16 bits data_len */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
+ 13, 12, /* octet 12~13, low 16 bits pkt_len */
+ 0xFF, 0xFF, /* skip high 16 bits pkt_type */
+ 1, /* octet 1, 8 bits pkt_type field */
+ 0 /* octet 0, 4 bits offset 4 pkt_type field */
+ );
+#else
shuf_msk = _mm_set_epi8(
7, 6, 5, 4, /* octet 4~7, 32bits rss */
0xFF, 0xFF, /* skip high 16 bits vlan_macip, zero out */
@@ -241,18 +288,28 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
13, 12, /* octet 12~13, 16 bits data_len */
0xFF, 0xFF /* skip pkt_type field */
);
+#endif /* RTE_NEXT_ABI */
/* Cache is empty -> need to scan the buffer rings, but first move
* the next 'n' mbufs into the cache */
sw_ring = &rxq->sw_ring[rxq->rx_tail];
- /*
- * A. load 4 packet in one loop
+#ifdef RTE_NEXT_ABI
+ /* A. load 4 packet in one loop
+ * [A*. mask out 4 unused dirty field in desc]
* B. copy 4 mbuf point from swring to rx_pkts
* C. calc the number of DD bits among the 4 packets
* [C*. extract the end-of-packet bit, if requested]
* D. fill info. from desc to mbuf
*/
+#else
+ /* A. load 4 packet in one loop
+ * B. copy 4 mbuf point from swring to rx_pkts
+ * C. calc the number of DD bits among the 4 packets
+ * [C*. extract the end-of-packet bit, if requested]
+ * D. fill info. from desc to mbuf
+ */
+#endif /* RTE_NEXT_ABI */
for (pos = 0, nb_pkts_recd = 0; pos < RTE_IXGBE_VPMD_RX_BURST;
pos += RTE_IXGBE_DESCS_PER_LOOP,
rxdp += RTE_IXGBE_DESCS_PER_LOOP) {
@@ -289,6 +346,16 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.2 copy 2 mbuf point into rx_pkts */
_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);
+#ifdef RTE_NEXT_ABI
+ /* A* mask out 0~3 bits RSS type */
+ descs[3] = _mm_and_si128(descs[3], desc_mask);
+ descs[2] = _mm_and_si128(descs[2], desc_mask);
+
+ /* A* mask out 0~3 bits RSS type */
+ descs[1] = _mm_and_si128(descs[1], desc_mask);
+ descs[0] = _mm_and_si128(descs[0], desc_mask);
+#endif /* RTE_NEXT_ABI */
+
/* avoid compiler reorder optimization */
rte_compiler_barrier();
@@ -301,7 +368,11 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
+#ifdef RTE_NEXT_ABI
+ /* set ol_flags with vlan packet type */
+#else
/* set ol_flags with packet type and vlan tag */
+#endif /* RTE_NEXT_ABI */
desc_to_olflags_v(descs, &rx_pkts[pos]);
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index 1e55c2d..e9f38bd 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -117,9 +117,15 @@ struct rte_kni_mbuf {
uint16_t data_off; /**< Start address of data in segment buffer. */
char pad1[4];
uint64_t ol_flags; /**< Offload features. */
+#ifdef RTE_NEXT_ABI
+ char pad2[4];
+ uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
+#else
char pad2[2];
uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+#endif
/* fields on second cache line */
char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 80419df..ac29da3 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -276,6 +276,28 @@ struct rte_mbuf {
/* remaining bytes are set on RX when pulling packet from descriptor */
MARKER rx_descriptor_fields1;
+#ifdef RTE_NEXT_ABI
+ /*
+ * The packet type, which is the combination of outer/inner L2, L3, L4
+ * and tunnel types.
+ */
+ union {
+ uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
+ struct {
+ uint32_t l2_type:4; /**< (Outer) L2 type. */
+ uint32_t l3_type:4; /**< (Outer) L3 type. */
+ uint32_t l4_type:4; /**< (Outer) L4 type. */
+ uint32_t tun_type:4; /**< Tunnel type. */
+ uint32_t inner_l2_type:4; /**< Inner L2 type. */
+ uint32_t inner_l3_type:4; /**< Inner L3 type. */
+ uint32_t inner_l4_type:4; /**< Inner L4 type. */
+ };
+ };
+
+ uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
+ uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
+#else /* RTE_NEXT_ABI */
/**
* The packet type, which is used to indicate ordinary packet and also
* tunneled packet format, i.e. each number is represented a type of
@@ -287,6 +309,7 @@ struct rte_mbuf {
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU order) */
+#endif /* RTE_NEXT_ABI */
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
@@ -307,6 +330,9 @@ struct rte_mbuf {
} hash; /**< hash information */
uint32_t seqn; /**< Sequence number. See also rte_reorder_insert() */
+#ifdef RTE_NEXT_ABI
+ uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU order) */
+#endif /* RTE_NEXT_ABI */
/* second cache line - fields only used in slow path or on TX */
MARKER cacheline1 __rte_cache_aligned;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 02/19] mbuf: add definitions of unified packet types
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 01/19] mbuf: redefine packet_type in rte_mbuf Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-15 10:19 ` Olivier MATZ
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 03/19] e1000: replace bit mask based packet type with unified packet type Helin Zhang
` (17 subsequent siblings)
19 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
There are only 6 bit flags in ol_flags for indicating packet
types, which is not enough to describe all the possible packet
types hardware can recognize. For example, i40e hardware can
recognize more than 150 packet types. The unified packet type is
composed of L2 type, L3 type, L4 type, tunnel type, inner L2 type,
inner L3 type and inner L4 type fields, and is stored in the 32-bit
'packet_type' field of 'struct rte_mbuf'.
To avoid breaking ABI compatibility, all the changes are enabled
only when RTE_NEXT_ABI is defined, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 486 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 486 insertions(+)
v3 changes:
* Put the definitions of unified packet type into a single patch.
v4 changes:
* Added detailed descriptions of each packet type.
v5 changes:
* Re-worded the commit logs.
* Added more detailed description for all packet types, together with examples.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
v9 changes:
* Renamed MAC to ETHER in packet type names.
* Corrected the packet type explanation of RTE_PTYPE_L2_ETHER.
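Before the diff, a hedged usage sketch (not part of this patch) of how the
masks defined below can be combined; it assumes <rte_mbuf.h> with
RTE_NEXT_ABI defined:

	/* Non-zero if the packet is a GRE tunnel carrying TCP in its inner headers. */
	static inline int
	is_gre_with_inner_tcp(const struct rte_mbuf *m)
	{
		uint32_t ptype = m->packet_type;

		return (ptype & RTE_PTYPE_TUNNEL_MASK) == RTE_PTYPE_TUNNEL_GRE &&
		       (ptype & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_TCP;
	}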
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ac29da3..3a17d95 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -202,6 +202,492 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
+#ifdef RTE_NEXT_ABI
+/*
+ * 32 bits are divided into several fields to mark packet types. Note that
+ * each field is an index (an enumerated value), not a bit mask.
+ * - Bit 3:0 is for L2 types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
+ * - Bit 15:12 is for tunnel types.
+ * - Bit 19:16 is for inner L2 types.
+ * - Bit 23:20 is for inner L3 types.
+ * - Bit 27:24 is for inner L4 types.
+ * - Bit 31:28 is reserved.
+ *
+ * To be compatible with Vector PMD, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV4_EXT,
+ * RTE_PTYPE_L3_IPV6, RTE_PTYPE_L3_IPV6_EXT, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP
+ * and RTE_PTYPE_L4_SCTP should be kept as below in a contiguous 7 bits.
+ *
+ * Note that L3 type values are selected for checking IPV4/IPV6 headers from
+ * a performance point of view. Reading the annotations of RTE_ETH_IS_IPV4_HDR
+ * and RTE_ETH_IS_IPV6_HDR is needed for any future change of L3 type values.
+ *
+ * Note that the packet types of the same packet recognized by different
+ * hardware may be different, as different hardware may have different
+ * capability of packet type recognition.
+ *
+ * examples:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=0x29
+ * | 'version'=6, 'next header'=0x3A
+ * | 'ICMPv6 header'>
+ * will be recognized on i40e hardware as packet type combination of,
+ * RTE_PTYPE_L2_ETHER |
+ * RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ * RTE_PTYPE_TUNNEL_IP |
+ * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_INNER_L4_ICMP.
+ *
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x2F
+ * | 'GRE header'
+ * | 'version'=6, 'next header'=0x11
+ * | 'UDP header'>
+ * will be recognized on i40e hardware as packet type combination of,
+ * RTE_PTYPE_L2_ETHER |
+ * RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_TUNNEL_GRENAT |
+ * RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ * RTE_PTYPE_INNER_L4_UDP.
+ */
+#define RTE_PTYPE_UNKNOWN 0x00000000
+/**
+ * Ethernet packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=[0x0800|0x86DD]>
+ */
+#define RTE_PTYPE_L2_ETHER 0x00000001
+/**
+ * Ethernet packet type for time sync.
+ *
+ * Packet format:
+ * <'ether type'=0x88F7>
+ */
+#define RTE_PTYPE_L2_ETHER_TIMESYNC 0x00000002
+/**
+ * ARP (Address Resolution Protocol) packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0806>
+ */
+#define RTE_PTYPE_L2_ETHER_ARP 0x00000003
+/**
+ * LLDP (Link Layer Discovery Protocol) packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x88CC>
+ */
+#define RTE_PTYPE_L2_ETHER_LLDP 0x00000004
+/**
+ * Mask of layer 2 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L2_MASK 0x0000000f
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * header option.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=5>
+ */
+#define RTE_PTYPE_L3_IPV4 0x00000010
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and contains header
+ * options.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[6-15], 'options'>
+ */
+#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and does not contain any
+ * extension header.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x3B>
+ */
+#define RTE_PTYPE_L3_IPV6 0x00000040
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * header options.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[5-15], <'options'>>
+ */
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and contains extension
+ * headers.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * 'extension headers'>
+ */
+#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for outer packet for tunneling cases, and may or may not contain
+ * extension headers.
+ *
+ * Packet format:
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * <'extension headers'>>
+ */
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
+/**
+ * Mask of layer 3 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L3_MASK 0x000000f0
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=6, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=6>
+ */
+#define RTE_PTYPE_L4_TCP 0x00000100
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17>
+ */
+#define RTE_PTYPE_L4_UDP 0x00000200
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * It refers to those packets of any IP types, which can be recognized as
+ * fragmented. A fragmented packet cannot be recognized as any other L4 types
+ * (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP,
+ * RTE_PTYPE_L4_NONFRAG).
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'MF'=1>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=44>
+ */
+#define RTE_PTYPE_L4_FRAG 0x00000300
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=132, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=132>
+ */
+#define RTE_PTYPE_L4_SCTP 0x00000400
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=1, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=1>
+ */
+#define RTE_PTYPE_L4_ICMP 0x00000500
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for outer packet for tunneling cases.
+ *
+ * It refers to those packets of any IP types, while cannot be recognized as
+ * any of above L4 types (RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP,
+ * RTE_PTYPE_L4_FRAG, RTE_PTYPE_L4_SCTP, RTE_PTYPE_L4_ICMP).
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'!=[6|17|44|132|1]>
+ */
+#define RTE_PTYPE_L4_NONFRAG 0x00000600
+/**
+ * Mask of layer 4 packet types.
+ * It is used for outer packet for tunneling cases.
+ */
+#define RTE_PTYPE_L4_MASK 0x00000f00
+/**
+ * IP (Internet Protocol) in IP (Internet Protocol) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=[4|41]>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[4|41]>
+ */
+#define RTE_PTYPE_TUNNEL_IP 0x00001000
+/**
+ * GRE (Generic Routing Encapsulation) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=47>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=47>
+ */
+#define RTE_PTYPE_TUNNEL_GRE 0x00002000
+/**
+ * VXLAN (Virtual eXtensible Local Area Network) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17
+ * | 'destination port'=4798>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17
+ * | 'destination port'=4798>
+ */
+#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
+/**
+ * NVGRE (Network Virtualization using Generic Routing Encapsulation) tunneling
+ * packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=47
+ * | 'protocol type'=0x6558>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=47
+ * | 'protocol type'=0x6558'>
+ */
+#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
+/**
+ * GENEVE (Generic Network Virtualization Encapsulation) tunneling packet type.
+ *
+ * Packet format:
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17
+ * | 'destination port'=6081>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17
+ * | 'destination port'=6081>
+ */
+#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
+/**
+ * Tunneling packet type of Teredo, VXLAN (Virtual eXtensible Local Area
+ * Network) or GRE (Generic Routing Encapsulation) could be recognized as this
+ * packet type, if they cannot be recognized independently due to limited
+ * hardware capability.
+ */
+#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
+/**
+ * Mask of tunneling packet types.
+ */
+#define RTE_PTYPE_TUNNEL_MASK 0x0000f000
+/**
+ * Ethernet packet type.
+ * It is used for inner packet type only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=[0x800|0x86DD]>
+ */
+#define RTE_PTYPE_INNER_L2_ETHER 0x00010000
+/**
+ * Ethernet packet type with VLAN (Virtual Local Area Network) tag.
+ *
+ * Packet format (inner only):
+ * <'ether type'=[0x800|0x86DD], vlan=[1-4095]>
+ */
+#define RTE_PTYPE_INNER_L2_ETHER_VLAN 0x00020000
+/**
+ * Mask of inner layer 2 packet types.
+ */
+#define RTE_PTYPE_INNER_L2_MASK 0x000f0000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and does not contain any header option.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=5>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and contains header options.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[6-15], 'options'>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and does not contain any extension header.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=0x3B>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
+/**
+ * IP (Internet Protocol) version 4 packet type.
+ * It is used for inner packet only, and may or may not contain header options.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'ihl'=[5-15], <'options'>>
+ */
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and contains extension headers.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * 'extension headers'>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
+/**
+ * IP (Internet Protocol) version 6 packet type.
+ * It is used for inner packet only, and may or may not contain extension
+ * headers.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=[0x3B|0x0|0x2B|0x2C|0x32|0x33|0x3C|0x87],
+ * <'extension headers'>>
+ */
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
+/**
+ * Mask of inner layer 3 packet types.
+ */
+#define RTE_PTYPE_INNER_INNER_L3_MASK 0x00f00000
+/**
+ * TCP (Transmission Control Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=6, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=6>
+ */
+#define RTE_PTYPE_INNER_L4_TCP 0x01000000
+/**
+ * UDP (User Datagram Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=17, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=17>
+ */
+#define RTE_PTYPE_INNER_L4_UDP 0x02000000
+/**
+ * Fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not have a layer 4 header.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'MF'=1>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=44>
+ */
+#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
+/**
+ * SCTP (Stream Control Transmission Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=132, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=132>
+ */
+#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
+/**
+ * ICMP (Internet Control Message Protocol) packet type.
+ * It is used for inner packet only.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'=1, 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'=1>
+ */
+#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
+/**
+ * Non-fragmented IP (Internet Protocol) packet type.
+ * It is used for inner packet only, and may or may not have other unknown layer
+ * 4 packet types.
+ *
+ * Packet format (inner only):
+ * <'ether type'=0x0800
+ * | 'version'=4, 'protocol'!=[6|17|132|1], 'MF'=0>
+ * or,
+ * <'ether type'=0x86DD
+ * | 'version'=6, 'next header'!=[6|17|44|132|1]>
+ */
+#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
+/**
+ * Mask of inner layer 4 packet types.
+ */
+#define RTE_PTYPE_INNER_L4_MASK 0x0f000000
+
+/**
+ * Check if the (outer) L3 header is IPv4. To avoid comparing IPv4 types one by
+ * one, bit 4 is selected to be used for IPv4 only. Then checking bit 4 can
+ * determine if it is an IPv4 packet.
+ */
+#define RTE_ETH_IS_IPV4_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV4)
+
+/**
+ * Check if the (outer) L3 header is IPv6. To avoid comparing IPv6 types one by
+ * one, bit 6 is selected to be used for IPv6 only. Then checking bit 6 can
+ * determine if it is an IPv6 packet.
+ */
+#define RTE_ETH_IS_IPV6_HDR(ptype) ((ptype) & RTE_PTYPE_L3_IPV6)
+
+/* Check if it is a tunneling packet */
+#define RTE_ETH_IS_TUNNEL_PKT(ptype) ((ptype) & RTE_PTYPE_TUNNEL_MASK)
+#endif /* RTE_NEXT_ABI */
+
/**
* Get the name of a RX offload flag
*
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 03/19] e1000: replace bit mask based packet type with unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 01/19] mbuf: redefine packet_type in rte_mbuf Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 02/19] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 04/19] ixgbe: " Helin Zhang
` (16 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled
only when RTE_NEXT_ABI is defined, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/e1000/igb_rxtx.c | 104 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 104 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
v9 changes:
* Renamed MAC to ETHER in packet type names.
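The general pattern used below, shown as a hedged sketch with hypothetical
names (it is not the driver code itself): the hardware reports a small
packet-type index in the RX descriptor, which the PMD masks and uses to index
a static table of RTE_PTYPE_* combinations.

	/* Hypothetical illustration of the descriptor-to-ptype table lookup. */
	static const uint32_t example_ptype_table[4] = {
		RTE_PTYPE_UNKNOWN,
		RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4,
		RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
		RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
	};

	static inline uint32_t
	example_desc_to_ptype(uint16_t hw_pkt_info)
	{
		return example_ptype_table[hw_pkt_info & 0x3];
	}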
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 43d6703..165144c 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -590,6 +590,101 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
+#ifdef RTE_NEXT_ABI
+#define IGB_PACKET_TYPE_IPV4 0X01
+#define IGB_PACKET_TYPE_IPV4_TCP 0X11
+#define IGB_PACKET_TYPE_IPV4_UDP 0X21
+#define IGB_PACKET_TYPE_IPV4_SCTP 0X41
+#define IGB_PACKET_TYPE_IPV4_EXT 0X03
+#define IGB_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IGB_PACKET_TYPE_IPV6 0X04
+#define IGB_PACKET_TYPE_IPV6_TCP 0X14
+#define IGB_PACKET_TYPE_IPV6_UDP 0X24
+#define IGB_PACKET_TYPE_IPV6_EXT 0X0C
+#define IGB_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IGB_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IGB_PACKET_TYPE_IPV4_IPV6 0X05
+#define IGB_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IGB_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IGB_PACKET_TYPE_MAX 0X80
+#define IGB_PACKET_TYPE_MASK 0X7F
+#define IGB_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IGB_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IGB_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4,
+ [IGB_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IGB_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IGB_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IGB_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IGB_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IGB_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IGB_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & E1000_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IGB_PACKET_TYPE_SHIFT) & IGB_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
+{
+ uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH;
+
+#if defined(RTE_LIBRTE_IEEE1588)
+ static uint32_t ip_pkt_etqf_map[8] = {
+ 0, 0, 0, PKT_RX_IEEE1588_PTP,
+ 0, 0, 0, 0,
+ };
+
+ pkt_flags |= ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07];
+#endif
+
+ return pkt_flags;
+}
+#else /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
@@ -617,6 +712,7 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
#endif
return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
}
+#endif /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
@@ -790,6 +886,10 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
+ lo_dword.hs_rss.pkt_info);
+#endif
/*
* Store the mbuf address into the next entry of the array
@@ -1024,6 +1124,10 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
+#ifdef RTE_NEXT_ABI
+ first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
+ lower.lo_dword.hs_rss.pkt_info);
+#endif
/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 04/19] ixgbe: replace bit mask based packet type with unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (2 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 03/19] e1000: replace bit mask based packet type with unified packet type Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 05/19] i40e: " Helin Zhang
` (15 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled
only when RTE_NEXT_ABI is defined, which is disabled by default.
Note that a performance drop of around 2.5% (64B packets) was observed
when doing 4-port (1 port per 82599 card) IO forwarding on the same SNB core.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 163 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 163 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
v9 changes:
* Renamed MAC to ETHER in packet type names.
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index b1db57f..9e99e80 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -859,6 +859,110 @@ end_of_tx:
* RX functions
*
**********************************************************************/
+#ifdef RTE_NEXT_ABI
+#define IXGBE_PACKET_TYPE_IPV4 0X01
+#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
+#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
+#define IXGBE_PACKET_TYPE_IPV4_SCTP 0X41
+#define IXGBE_PACKET_TYPE_IPV4_EXT 0X03
+#define IXGBE_PACKET_TYPE_IPV4_EXT_SCTP 0X43
+#define IXGBE_PACKET_TYPE_IPV6 0X04
+#define IXGBE_PACKET_TYPE_IPV6_TCP 0X14
+#define IXGBE_PACKET_TYPE_IPV6_UDP 0X24
+#define IXGBE_PACKET_TYPE_IPV6_EXT 0X0C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_TCP 0X1C
+#define IXGBE_PACKET_TYPE_IPV6_EXT_UDP 0X2C
+#define IXGBE_PACKET_TYPE_IPV4_IPV6 0X05
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_TCP 0X15
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_UDP 0X25
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT 0X0D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP 0X1D
+#define IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP 0X2D
+#define IXGBE_PACKET_TYPE_MAX 0X80
+#define IXGBE_PACKET_TYPE_MASK 0X7F
+#define IXGBE_PACKET_TYPE_SHIFT 0X04
+static inline uint32_t
+ixgbe_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
+{
+ static const uint32_t
+ ptype_table[IXGBE_PACKET_TYPE_MAX] __rte_cache_aligned = {
+ [IXGBE_PACKET_TYPE_IPV4] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4,
+ [IXGBE_PACKET_TYPE_IPV4_EXT] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [IXGBE_PACKET_TYPE_IPV6] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6,
+ [IXGBE_PACKET_TYPE_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ [IXGBE_PACKET_TYPE_IPV4_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+ [IXGBE_PACKET_TYPE_IPV4_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+ [IXGBE_PACKET_TYPE_IPV4_SCTP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [IXGBE_PACKET_TYPE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+ };
+ if (unlikely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return RTE_PTYPE_UNKNOWN;
+
+ pkt_info = (pkt_info >> IXGBE_PACKET_TYPE_SHIFT) &
+ IXGBE_PACKET_TYPE_MASK;
+
+ return ptype_table[pkt_info];
+}
+
+static inline uint64_t
+ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
+{
+ static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
+ 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+ 0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
+ PKT_RX_RSS_HASH, 0, 0, 0,
+ 0, 0, 0, PKT_RX_FDIR,
+ };
+#ifdef RTE_LIBRTE_IEEE1588
+ static uint64_t ip_pkt_etqf_map[8] = {
+ 0, 0, 0, PKT_RX_IEEE1588_PTP,
+ 0, 0, 0, 0,
+ };
+
+ if (likely(pkt_info & IXGBE_RXDADV_PKTTYPE_ETQF))
+ return ip_pkt_etqf_map[(pkt_info >> 4) & 0X07] |
+ ip_rss_types_map[pkt_info & 0XF];
+ else
+ return ip_rss_types_map[pkt_info & 0XF];
+#else
+ return ip_rss_types_map[pkt_info & 0XF];
+#endif
+}
+#else /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
@@ -894,6 +998,7 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
#endif
return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
}
+#endif /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
@@ -949,7 +1054,13 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
+#ifdef RTE_NEXT_ABI
+ int nb_dd;
+ uint32_t s[LOOK_AHEAD];
+ uint16_t pkt_info[LOOK_AHEAD];
+#else
int s[LOOK_AHEAD], nb_dd;
+#endif /* RTE_NEXT_ABI */
int i, j, nb_rx = 0;
@@ -972,6 +1083,12 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rxdp[j].wb.upper.status_error;
+#ifdef RTE_NEXT_ABI
+ for (j = LOOK_AHEAD-1; j >= 0; --j)
+ pkt_info[j] = rxdp[j].wb.lower.lo_dword.
+ hs_rss.pkt_info;
+#endif /* RTE_NEXT_ABI */
+
/* Compute how many status bits were set */
nb_dd = 0;
for (j = 0; j < LOOK_AHEAD; ++j)
@@ -988,12 +1105,22 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
+#ifdef RTE_NEXT_ABI
+ pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
+ pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags |=
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
+ mb->ol_flags = pkt_flags;
+ mb->packet_type =
+ ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);
+#else /* RTE_NEXT_ABI */
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
rxdp[j].wb.lower.lo_dword.data);
/* reuse status field from scan list */
pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
mb->ol_flags = pkt_flags;
+#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
@@ -1210,7 +1337,11 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
+#ifdef RTE_NEXT_ABI
+ uint32_t pkt_info;
+#else
uint32_t hlen_type_rss;
+#endif
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1328,6 +1459,19 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;
+#ifdef RTE_NEXT_ABI
+ pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
+ pkt_info);
+ /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
+ rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
+
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+ rxm->ol_flags = pkt_flags;
+ rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
+#else /* RTE_NEXT_ABI */
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
@@ -1336,6 +1480,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
+#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
@@ -1409,6 +1554,23 @@ ixgbe_fill_cluster_head_buf(
uint8_t port_id,
uint32_t staterr)
{
+#ifdef RTE_NEXT_ABI
+ uint16_t pkt_info;
+ uint64_t pkt_flags;
+
+ head->port = port_id;
+
+ /* The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
+ * set in the pkt_flags field.
+ */
+ head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
+ pkt_info = rte_le_to_cpu_32(desc->wb.lower.lo_dword.hs_rss.pkt_info);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags |= ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+ head->ol_flags = pkt_flags;
+ head->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
+#else /* RTE_NEXT_ABI */
uint32_t hlen_type_rss;
uint64_t pkt_flags;
@@ -1424,6 +1586,7 @@ ixgbe_fill_cluster_head_buf(
pkt_flags |= rx_desc_status_to_pkt_flags(staterr);
pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
head->ol_flags = pkt_flags;
+#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
head->hash.rss = rte_le_to_cpu_32(desc->wb.lower.hi_dword.rss);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 05/19] i40e: replace bit mask based packet type with unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (3 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 04/19] ixgbe: " Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 06/19] enic: " Helin Zhang
` (14 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/i40e/i40e_rxtx.c | 554 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 554 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
v9 changes:
* Renamed MAC to ETHER in packet type names.
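Because i40e reports tunneled traffic, the composed packet_type produced by the
mapping table below also carries tunnel and inner-header fields. A rough sketch of
how a consumer might peel those apart (hypothetical helper, assuming RTE_NEXT_ABI
and the RTE_PTYPE_* definitions from rte_mbuf.h):

#include <rte_mbuf.h>

/* Illustrative only: true for GRE/Teredo/VXLAN-encapsulated traffic
 * whose inner L4 header was classified as UDP by the PMD. */
static inline int
is_grenat_inner_udp(uint32_t ptype)
{
        if (!RTE_ETH_IS_TUNNEL_PKT(ptype))
                return 0;
        if ((ptype & RTE_PTYPE_TUNNEL_MASK) != RTE_PTYPE_TUNNEL_GRENAT)
                return 0;
        return (ptype & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_UDP;
}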
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 88b015d..c667bbc 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -176,6 +176,540 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
return flags;
}
+#ifdef RTE_NEXT_ABI
+/* The hardware datasheet explains what each ptype value means in more detail */
+static inline uint32_t
+i40e_rxd_pkt_type_mapping(uint8_t ptype)
+{
+ static const uint32_t ptype_table[UINT8_MAX] __rte_cache_aligned = {
+ /* L2 types */
+ /* [0] reserved */
+ [1] = RTE_PTYPE_L2_ETHER,
+ [2] = RTE_PTYPE_L2_ETHER_TIMESYNC,
+ /* [3] - [5] reserved */
+ [6] = RTE_PTYPE_L2_ETHER_LLDP,
+ /* [7] - [10] reserved */
+ [11] = RTE_PTYPE_L2_ETHER_ARP,
+ /* [12] - [21] reserved */
+
+ /* Non tunneled IPv4 */
+ [22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [25] reserved */
+ [26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv4 --> IPv4 */
+ [29] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [30] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [31] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [32] reserved */
+ [33] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [34] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [35] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> IPv6 */
+ [36] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [37] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [38] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [39] reserved */
+ [40] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [41] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [42] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN */
+ [43] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv4 */
+ [44] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [45] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [46] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [47] reserved */
+ [48] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [49] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [50] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> IPv6 */
+ [51] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [52] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [53] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [54] reserved */
+ [55] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [56] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [57] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC */
+ [58] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [59] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [60] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [61] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [62] reserved */
+ [63] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [64] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [65] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [66] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [67] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [68] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [69] reserved */
+ [70] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [71] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [72] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [73] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [74] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [75] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [76] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [77] reserved */
+ [78] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [79] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [80] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv4 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [81] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [82] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [83] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [84] reserved */
+ [85] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [86] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [87] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* Non tunneled IPv6 */
+ [88] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_NONFRAG,
+ [90] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_UDP,
+ /* [91] reserved */
+ [92] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_L4_ICMP,
+
+ /* IPv6 --> IPv4 */
+ [95] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [96] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [97] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [98] reserved */
+ [99] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [100] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [101] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> IPv6 */
+ [102] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [103] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [104] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [105] reserved */
+ [106] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [107] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [108] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_IP |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN */
+ [109] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv4 */
+ [110] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [111] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [112] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [113] reserved */
+ [114] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [115] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [116] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> IPv6 */
+ [117] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [118] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [119] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [120] reserved */
+ [121] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [122] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [123] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC */
+ [124] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv4 */
+ [125] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [126] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [127] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [128] reserved */
+ [129] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [130] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [131] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC --> IPv6 */
+ [132] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [133] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [134] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [135] reserved */
+ [136] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [137] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [138] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN */
+ [139] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv4 */
+ [140] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [141] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [142] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [143] reserved */
+ [144] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [145] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [146] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* IPv6 --> GRE/Teredo/VXLAN --> MAC/VLAN --> IPv6 */
+ [147] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_FRAG,
+ [148] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_NONFRAG,
+ [149] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_UDP,
+ /* [150] reserved */
+ [151] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_TCP,
+ [152] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_SCTP,
+ [153] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_TUNNEL_GRENAT |
+ RTE_PTYPE_INNER_L2_ETHER_VLAN |
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN |
+ RTE_PTYPE_INNER_L4_ICMP,
+
+ /* All others reserved */
+ };
+
+ return ptype_table[ptype];
+}
+#else /* RTE_NEXT_ABI */
/* Translate pkt types to pkt flags */
static inline uint64_t
i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
@@ -443,6 +977,7 @@ i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
return ip_ptype_map[ptype];
}
+#endif /* RTE_NEXT_ABI */
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_FD_ID 0x01
@@ -730,11 +1265,18 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
i40e_rxd_to_vlan_tci(mb, &rxdp[j]);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+#ifdef RTE_NEXT_ABI
+ mb->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT));
+#else
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
mb->packet_type = (uint16_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT);
+#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -971,9 +1513,15 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
i40e_rxd_to_vlan_tci(rxm, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
+#else
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT);
+#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1129,10 +1677,16 @@ i40e_recv_scattered_pkts(void *rx_queue,
i40e_rxd_to_vlan_tci(first_seg, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+#ifdef RTE_NEXT_ABI
+ first_seg->packet_type =
+ i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
+#else
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
first_seg->packet_type = (uint16_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT);
+#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 06/19] enic: replace bit mask based packet type with unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (4 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 05/19] i40e: " Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 07/19] vmxnet3: " Helin Zhang
` (13 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/enic/enic_main.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 15313c2..f47e96c 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -423,7 +423,11 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -432,7 +436,11 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
} else {
/* Header split */
if (sop && !eop) {
@@ -445,7 +453,11 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -457,13 +469,22 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
+#ifdef RTE_NEXT_ABI
+ rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
+#else
rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
+#ifdef RTE_NEXT_ABI
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV4;
+#else
hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -475,7 +496,12 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
+#ifdef RTE_NEXT_ABI
+ hdr_rx_pkt->packet_type =
+ RTE_PTYPE_L3_IPV6;
+#else
hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
}
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 07/19] vmxnet3: replace bit mask based packet type with unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (5 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 06/19] enic: " Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 08/19] fm10k: " Helin Zhang
` (12 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/vmxnet3/vmxnet3_rxtx.c | 8 ++++++++
1 file changed, 8 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index a1eac45..25ae2f6 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -649,9 +649,17 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
+#else
rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
+#endif
else
+#ifdef RTE_NEXT_ABI
+ rxm->packet_type = RTE_PTYPE_L3_IPV4;
+#else
rxm->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (!rcd->cnc) {
if (!rcd->ipc)
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 08/19] fm10k: replace bit mask based packet type with unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (6 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 07/19] vmxnet3: " Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 09/19] cxgbe: " Helin Zhang
` (11 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/fm10k/fm10k_rxtx.c | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)
v4 changes:
* Supported unified packet type of fm10k from v4.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
v9 changes:
* Renamed MAC to ETHER in packet type names.
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 7d5e32c..b5fa2e6 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -68,12 +68,37 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
static inline void
rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
{
+#ifdef RTE_NEXT_ABI
+ static const uint32_t
+ ptype_table[FM10K_RXD_PKTTYPE_MASK >> FM10K_RXD_PKTTYPE_SHIFT]
+ __rte_cache_aligned = {
+ [FM10K_PKTTYPE_OTHER] = RTE_PTYPE_L2_ETHER,
+ [FM10K_PKTTYPE_IPV4] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4,
+ [FM10K_PKTTYPE_IPV4_EX] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4_EXT,
+ [FM10K_PKTTYPE_IPV6] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6,
+ [FM10K_PKTTYPE_IPV6_EX] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6_EXT,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_TCP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [FM10K_PKTTYPE_IPV4 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [FM10K_PKTTYPE_IPV6 | FM10K_PKTTYPE_UDP] = RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ };
+
+ m->packet_type = ptype_table[(d->w.pkt_info & FM10K_RXD_PKTTYPE_MASK)
+ >> FM10K_RXD_PKTTYPE_SHIFT];
+#else /* RTE_NEXT_ABI */
uint16_t ptype;
static const uint16_t pt_lut[] = { 0,
PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT,
PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT,
0, 0, 0
};
+#endif /* RTE_NEXT_ABI */
if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK)
m->ol_flags |= PKT_RX_RSS_HASH;
@@ -97,9 +122,11 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
if (unlikely(d->d.staterr & FM10K_RXD_STATUS_RXE))
m->ol_flags |= PKT_RX_RECIP_ERR;
+#ifndef RTE_NEXT_ABI
ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >>
FM10K_RXD_PKTTYPE_SHIFT;
m->ol_flags |= pt_lut[(uint8_t)ptype];
+#endif
}
uint16_t
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 09/19] cxgbe: replace bit mask based packet type with unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (7 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 08/19] fm10k: " Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 10/19] app/test-pipeline: " Helin Zhang
` (10 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be enabled
by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/cxgbe/sge.c | 8 ++++++++
1 file changed, 8 insertions(+)
v9 changes:
* Added unified packet type support in the newly added cxgbe driver.
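As in the enic and vmxnet3 patches, the hardware classification used here only
identifies the outer L3 header, so the driver fills in just the L3 field of
packet_type and leaves the other fields zero. A zero field should be read as
"not reported" rather than "not present"; for example (illustrative check,
mask from rte_mbuf.h):

/* Illustrative only: a zero L4 field means the PMD did not classify L4 */
static inline int
l4_type_reported(uint32_t ptype)
{
        return (ptype & RTE_PTYPE_L4_MASK) != 0;
}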
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 359296e..fdae0b4 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1326,14 +1326,22 @@ int t4_ethrx_handler(struct sge_rspq *q, const __be64 *rsp,
mbuf->port = pkt->iff;
if (pkt->l2info & htonl(F_RXF_IP)) {
+#ifdef RTE_NEXT_ABI
+ mbuf->packet_type = RTE_PTYPE_L3_IPV4;
+#else
mbuf->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (unlikely(!csum_ok))
mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
if ((pkt->l2info & htonl(F_RXF_UDP | F_RXF_TCP)) && !csum_ok)
mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
} else if (pkt->l2info & htonl(F_RXF_IP6)) {
+#ifdef RTE_NEXT_ABI
+ mbuf->packet_type = RTE_PTYPE_L3_IPV6;
+#else
mbuf->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
}
mbuf->port = pkt->iff;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 10/19] app/test-pipeline: replace bit mask based packet type with unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (8 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 09/19] cxgbe: " Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 11/19] app/testpmd: " Helin Zhang
` (9 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pipeline/pipeline_hash.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
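The diff below keeps the old and new classification inline behind #ifdef
RTE_NEXT_ABI, as do the other application patches in this series. An application
that wants to support both ABIs without sprinkling #ifdefs through its fast path
could equally wrap the test once, e.g. (hypothetical macros, not part of the
patch):

#ifdef RTE_NEXT_ABI
#define APP_PKT_IS_IPV4(m) RTE_ETH_IS_IPV4_HDR((m)->packet_type)
#define APP_PKT_IS_IPV6(m) RTE_ETH_IS_IPV6_HDR((m)->packet_type)
#else
#define APP_PKT_IS_IPV4(m) (((m)->ol_flags & PKT_RX_IPV4_HDR) != 0)
#define APP_PKT_IS_IPV6(m) \
        (((m)->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) != 0)
#endif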
diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index 4598ad4..aa3f9e5 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,20 +459,33 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;
k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+#else
} else {
+#endif
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;
memcpy(key, ipv6_dst, 16);
+#ifdef RTE_NEXT_ABI
+ } else
+ continue;
+#else
}
+#endif
*signature = test_hash(key, 0, 0);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 11/19] app/testpmd: replace bit mask based packet type with unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (9 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 10/19] app/test-pipeline: " Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 12/19] app/test: Remove useless code Helin Zhang
` (8 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
app/test-pmd/csumonly.c | 14 ++++
app/test-pmd/rxonly.c | 183 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 197 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v4 changes:
* Added printing logs of packet types of each received packet in rxonly mode.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
v9 changes:
* Renamed MAC to ETHER in packet type names.
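The rxonly.c changes in the diff below decode each field of packet_type with a
mask and a switch. As a worked example of what a composed value holds, a plain
Ethernet/IPv4/UDP packet is reported as RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 |
RTE_PTYPE_L4_UDP, and the individual layers are recovered like this (illustrative
snippet, masks from rte_mbuf.h):

uint32_t ptype = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP;
uint32_t l2  = ptype & RTE_PTYPE_L2_MASK;     /* RTE_PTYPE_L2_ETHER */
uint32_t l3  = ptype & RTE_PTYPE_L3_MASK;     /* RTE_PTYPE_L3_IPV4 */
uint32_t l4  = ptype & RTE_PTYPE_L4_MASK;     /* RTE_PTYPE_L4_UDP */
uint32_t tun = ptype & RTE_PTYPE_TUNNEL_MASK; /* 0: no tunnel reported */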
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 4287940..1bf3485 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -202,8 +202,14 @@ parse_ethernet(struct ether_hdr *eth_hdr, struct testpmd_offload_info *info)
/* Parse a vxlan header */
static void
+#ifdef RTE_NEXT_ABI
+parse_vxlan(struct udp_hdr *udp_hdr,
+ struct testpmd_offload_info *info,
+ uint32_t pkt_type)
+#else
parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
uint64_t mbuf_olflags)
+#endif
{
struct ether_hdr *eth_hdr;
@@ -211,8 +217,12 @@ parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
* (rfc7348) or that the rx offload flag is set (i40e only
* currently) */
if (udp_hdr->dst_port != _htons(4789) &&
+#ifdef RTE_NEXT_ABI
+ RTE_ETH_IS_TUNNEL_PKT(pkt_type) == 0)
+#else
(mbuf_olflags & (PKT_RX_TUNNEL_IPV4_HDR |
PKT_RX_TUNNEL_IPV6_HDR)) == 0)
+#endif
return;
info->is_tunnel = 1;
@@ -549,7 +559,11 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
struct udp_hdr *udp_hdr;
udp_hdr = (struct udp_hdr *)((char *)l3_hdr +
info.l3_len);
+#ifdef RTE_NEXT_ABI
+ parse_vxlan(udp_hdr, &info, m->packet_type);
+#else
parse_vxlan(udp_hdr, &info, m->ol_flags);
+#endif
} else if (info.l4_proto == IPPROTO_GRE) {
struct simple_gre_hdr *gre_hdr;
gre_hdr = (struct simple_gre_hdr *)
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index 4a9f86e..632056d 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -91,7 +91,11 @@ pkt_burst_receive(struct fwd_stream *fs)
uint64_t ol_flags;
uint16_t nb_rx;
uint16_t i, packet_type;
+#ifdef RTE_NEXT_ABI
+ uint16_t is_encapsulation;
+#else
uint64_t is_encapsulation;
+#endif
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
@@ -135,8 +139,12 @@ pkt_burst_receive(struct fwd_stream *fs)
ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
+#ifdef RTE_NEXT_ABI
+ is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
+#else
is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
PKT_RX_TUNNEL_IPV6_HDR);
+#endif
print_ether_addr(" src=", ð_hdr->s_addr);
print_ether_addr(" - dst=", ð_hdr->d_addr);
@@ -163,6 +171,177 @@ pkt_burst_receive(struct fwd_stream *fs)
if (ol_flags & PKT_RX_QINQ_PKT)
printf(" - QinQ VLAN tci=0x%x, VLAN tci outer=0x%x",
mb->vlan_tci, mb->vlan_tci_outer);
+#ifdef RTE_NEXT_ABI
+ if (mb->packet_type) {
+ uint32_t ptype;
+
+ /* (outer) L2 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L2_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L2_ETHER:
+ printf(" - (outer) L2 type: ETHER");
+ break;
+ case RTE_PTYPE_L2_ETHER_TIMESYNC:
+ printf(" - (outer) L2 type: ETHER_Timesync");
+ break;
+ case RTE_PTYPE_L2_ETHER_ARP:
+ printf(" - (outer) L2 type: ETHER_ARP");
+ break;
+ case RTE_PTYPE_L2_ETHER_LLDP:
+ printf(" - (outer) L2 type: ETHER_LLDP");
+ break;
+ default:
+ printf(" - (outer) L2 type: Unknown");
+ break;
+ }
+
+ /* (outer) L3 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L3_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L3_IPV4:
+ printf(" - (outer) L3 type: IPV4");
+ break;
+ case RTE_PTYPE_L3_IPV4_EXT:
+ printf(" - (outer) L3 type: IPV4_EXT");
+ break;
+ case RTE_PTYPE_L3_IPV6:
+ printf(" - (outer) L3 type: IPV6");
+ break;
+ case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
+ printf(" - (outer) L3 type: IPV4_EXT_UNKNOWN");
+ break;
+ case RTE_PTYPE_L3_IPV6_EXT:
+ printf(" - (outer) L3 type: IPV6_EXT");
+ break;
+ case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN:
+ printf(" - (outer) L3 type: IPV6_EXT_UNKNOWN");
+ break;
+ default:
+ printf(" - (outer) L3 type: Unknown");
+ break;
+ }
+
+ /* (outer) L4 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_L4_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_L4_TCP:
+ printf(" - (outer) L4 type: TCP");
+ break;
+ case RTE_PTYPE_L4_UDP:
+ printf(" - (outer) L4 type: UDP");
+ break;
+ case RTE_PTYPE_L4_FRAG:
+ printf(" - (outer) L4 type: L4_FRAG");
+ break;
+ case RTE_PTYPE_L4_SCTP:
+ printf(" - (outer) L4 type: SCTP");
+ break;
+ case RTE_PTYPE_L4_ICMP:
+ printf(" - (outer) L4 type: ICMP");
+ break;
+ case RTE_PTYPE_L4_NONFRAG:
+ printf(" - (outer) L4 type: L4_NONFRAG");
+ break;
+ default:
+ printf(" - (outer) L4 type: Unknown");
+ break;
+ }
+
+ /* packet tunnel type */
+ ptype = mb->packet_type & RTE_PTYPE_TUNNEL_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_TUNNEL_IP:
+ printf(" - Tunnel type: IP");
+ break;
+ case RTE_PTYPE_TUNNEL_GRE:
+ printf(" - Tunnel type: GRE");
+ break;
+ case RTE_PTYPE_TUNNEL_VXLAN:
+ printf(" - Tunnel type: VXLAN");
+ break;
+ case RTE_PTYPE_TUNNEL_NVGRE:
+ printf(" - Tunnel type: NVGRE");
+ break;
+ case RTE_PTYPE_TUNNEL_GENEVE:
+ printf(" - Tunnel type: GENEVE");
+ break;
+ case RTE_PTYPE_TUNNEL_GRENAT:
+ printf(" - Tunnel type: GRENAT");
+ break;
+ default:
+ printf(" - Tunnel type: Unknown");
+ break;
+ }
+
+ /* inner L2 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_L2_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L2_ETHER:
+ printf(" - Inner L2 type: ETHER");
+ break;
+ case RTE_PTYPE_INNER_L2_ETHER_VLAN:
+ printf(" - Inner L2 type: ETHER_VLAN");
+ break;
+ default:
+ printf(" - Inner L2 type: Unknown");
+ break;
+ }
+
+ /* inner L3 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_INNER_L3_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L3_IPV4:
+ printf(" - Inner L3 type: IPV4");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV4_EXT:
+ printf(" - Inner L3 type: IPV4_EXT");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6:
+ printf(" - Inner L3 type: IPV6");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN:
+ printf(" - Inner L3 type: IPV4_EXT_UNKNOWN");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6_EXT:
+ printf(" - Inner L3 type: IPV6_EXT");
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN:
+ printf(" - Inner L3 type: IPV6_EXT_UNKOWN");
+ break;
+ default:
+ printf(" - Inner L3 type: Unknown");
+ break;
+ }
+
+ /* inner L4 packet type */
+ ptype = mb->packet_type & RTE_PTYPE_INNER_L4_MASK;
+ switch (ptype) {
+ case RTE_PTYPE_INNER_L4_TCP:
+ printf(" - Inner L4 type: TCP");
+ break;
+ case RTE_PTYPE_INNER_L4_UDP:
+ printf(" - Inner L4 type: UDP");
+ break;
+ case RTE_PTYPE_INNER_L4_FRAG:
+ printf(" - Inner L4 type: L4_FRAG");
+ break;
+ case RTE_PTYPE_INNER_L4_SCTP:
+ printf(" - Inner L4 type: SCTP");
+ break;
+ case RTE_PTYPE_INNER_L4_ICMP:
+ printf(" - Inner L4 type: ICMP");
+ break;
+ case RTE_PTYPE_INNER_L4_NONFRAG:
+ printf(" - Inner L4 type: L4_NONFRAG");
+ break;
+ default:
+ printf(" - Inner L4 type: Unknown");
+ break;
+ }
+ printf("\n");
+ } else
+ printf("Unknown packet type\n");
+#endif /* RTE_NEXT_ABI */
if (is_encapsulation) {
struct ipv4_hdr *ipv4_hdr;
struct ipv6_hdr *ipv6_hdr;
@@ -176,7 +355,11 @@ pkt_burst_receive(struct fwd_stream *fs)
l2_len = sizeof(struct ether_hdr);
/* Do not support ipv4 option field */
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
+#else
if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
+#endif
l3_len = sizeof(struct ipv4_hdr);
ipv4_hdr = rte_pktmbuf_mtod_offset(mb,
struct ipv4_hdr *,
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 12/19] app/test: Remove useless code
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (10 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 11/19] app/testpmd: " Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 13/19] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
` (7 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
Several useless code lines were added accidentally, and they block packet
type unification. They should be removed entirely.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test/packet_burst_generator.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
v4 changes:
* Removed several useless code lines which block packet type unification.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index 28d9e25..d9d808b 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -273,19 +273,21 @@ nomore_mbuf:
if (ipv4) {
pkt->vlan_tci = ETHER_TYPE_IPv4;
pkt->l3_len = sizeof(struct ipv4_hdr);
-
+#ifndef RTE_NEXT_ABI
if (vlan_enabled)
pkt->ol_flags = PKT_RX_IPV4_HDR | PKT_RX_VLAN_PKT;
else
pkt->ol_flags = PKT_RX_IPV4_HDR;
+#endif
} else {
pkt->vlan_tci = ETHER_TYPE_IPv6;
pkt->l3_len = sizeof(struct ipv6_hdr);
-
+#ifndef RTE_NEXT_ABI
if (vlan_enabled)
pkt->ol_flags = PKT_RX_IPV6_HDR | PKT_RX_VLAN_PKT;
else
pkt->ol_flags = PKT_RX_IPV6_HDR;
+#endif
}
pkts_burst[nb_pkt] = pkt;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 13/19] examples/ip_fragmentation: replace bit mask based packet type with unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (11 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 12/19] app/test: Remove useless code Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 14/19] examples/ip_reassembly: " Helin Zhang
` (6 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_fragmentation/main.c | 9 +++++++++
1 file changed, 9 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 0922ba6..b71d05f 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -283,7 +283,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
len = qconf->tx_mbufs[port_out].len;
/* if this is an IPv4 packet */
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
/* Read the lookup key (i.e. ip_dst) from the input packet */
@@ -317,9 +321,14 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
if (unlikely (len2 < 0))
return;
}
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if this is an IPv6 packet */
+#else
}
/* if this is an IPv6 packet */
else if (m->ol_flags & PKT_RX_IPV6_HDR) {
+#endif
struct ipv6_hdr *ip_hdr;
ipv6 = 1;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
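Across this and the following example patches the application-side change is the
same swap of the classification test; a minimal sketch of it, outside any patch,
is below. The wrapper function is hypothetical, while RTE_NEXT_ABI,
RTE_ETH_IS_IPV4_HDR and PKT_RX_IPV4_HDR are the names used in the diffs.
    #include <rte_mbuf.h>

    /* Hypothetical helper: classify an mbuf as IPv4 either through the new
     * 32-bit packet_type field (RTE_NEXT_ABI) or through the old ol_flags
     * bit mask, mirroring the #ifdef pattern used in the patch above. */
    static inline int
    pkt_is_ipv4(const struct rte_mbuf *m)
    {
    #ifdef RTE_NEXT_ABI
    	return RTE_ETH_IS_IPV4_HDR(m->packet_type) != 0;
    #else
    	return (m->ol_flags & PKT_RX_IPV4_HDR) != 0;
    #endif
    }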
* [dpdk-dev] [PATCH v10 14/19] examples/ip_reassembly: replace bit mask based packet type with unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (12 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 13/19] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 15/19] examples/l3fwd-acl: " Helin Zhang
` (5 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ip_reassembly/main.c | 9 +++++++++
1 file changed, 9 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 9ecb6f9..f1c47ad 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -356,7 +356,11 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dst_port = portid;
/* if packet is IPv4 */
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & (PKT_RX_IPV4_HDR)) {
+#endif
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
@@ -396,9 +400,14 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
}
eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+ /* if packet is IPv6 */
+#else
}
/* if packet is IPv6 */
else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
+#endif
struct ipv6_extension_fragment *frag_hdr;
struct ipv6_hdr *ip_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 15/19] examples/l3fwd-acl: replace bit mask based packet type with unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (13 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 14/19] examples/ip_reassembly: " Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 16/19] examples/l3fwd-power: " Helin Zhang
` (4 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-acl/main.c | 29 +++++++++++++++++++++++------
1 file changed, 23 insertions(+), 6 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 29cb25e..b2bdf2f 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -645,10 +645,13 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
+#else
int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
if (type == PKT_RX_IPV4_HDR) {
-
+#endif
ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct ipv4_hdr *,
sizeof(struct ether_hdr));
@@ -667,9 +670,11 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
/* Not a valid IPv4 packet */
rte_pktmbuf_free(pkt);
}
-
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
+#else
} else if (type == PKT_RX_IPV6_HDR) {
-
+#endif
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -687,17 +692,22 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
+#else
int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
if (type == PKT_RX_IPV4_HDR) {
-
+#endif
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;
-
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
+#else
} else if (type == PKT_RX_IPV6_HDR) {
-
+#endif
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -745,10 +755,17 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
+ dump_acl4_rule(m, res);
+ else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
+ dump_acl6_rule(m, res);
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR)
dump_acl4_rule(m, res);
else
dump_acl6_rule(m, res);
+#endif /* RTE_NEXT_ABI */
}
#endif
rte_pktmbuf_free(m);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 16/19] examples/l3fwd-power: replace bit mask based packet type with unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (14 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 15/19] examples/l3fwd-acl: " Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 17/19] examples/l3fwd: " Helin Zhang
` (3 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd-power/main.c | 8 ++++++++
1 file changed, 8 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index d4eba1a..dbbebdd 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -635,7 +635,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
/* Handle IPv4 headers.*/
ipv4_hdr =
rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *,
@@ -670,8 +674,12 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+#else
}
else {
+#endif
/* Handle IPv6 headers.*/
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
struct ipv6_hdr *ipv6_hdr;
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* [dpdk-dev] [PATCH v10 17/19] examples/l3fwd: replace bit mask based packet type with unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (15 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 16/19] examples/l3fwd-power: " Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 18/19] examples/tep_termination: " Helin Zhang
` (2 subsequent siblings)
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/l3fwd/main.c | 123 ++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 120 insertions(+), 3 deletions(-)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
v3 changes:
* Minor bug fixes and enhancements.
v5 changes:
* Re-worded the commit logs.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 5c22ed1..b1bcb35 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -939,7 +939,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
+#else
if (m->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
/* Handle IPv4 headers.*/
ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *,
sizeof(struct ether_hdr));
@@ -970,8 +974,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
-
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
+#else
} else {
+#endif
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;
@@ -990,8 +997,13 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
+#ifdef RTE_NEXT_ABI
+ } else
+ /* Free the mbuf that contains non-IPV4/IPV6 packet */
+ rte_pktmbuf_free(m);
+#else
}
-
+#endif
}
#ifdef DO_RFC_1812_CHECKS
@@ -1015,12 +1027,19 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
+#ifdef RTE_NEXT_ABI
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
+#else
rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
+#endif
{
uint8_t ihl;
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(ptype)) {
+#else
if ((flags & PKT_RX_IPV4_HDR) != 0) {
-
+#endif
ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
ipv4_hdr->time_to_live--;
@@ -1050,11 +1069,19 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
+#else
if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
+#endif
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
+#ifdef RTE_NEXT_ABI
+ } else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
+#else
} else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
+#endif
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1088,12 +1115,52 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];
dst_port[0] = dp;
+#ifdef RTE_NEXT_ABI
+ rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
+#else
rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
+#endif
te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
}
+#ifdef RTE_NEXT_ABI
+/*
+ * Read packet_type and destination IPV4 addresses from 4 mbufs.
+ */
+static inline void
+processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
+ __m128i *dip,
+ uint32_t *ipv4_flag)
+{
+ struct ipv4_hdr *ipv4_hdr;
+ struct ether_hdr *eth_hdr;
+ uint32_t x0, x1, x2, x3;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x0 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] = pkt[0]->packet_type & RTE_PTYPE_L3_IPV4;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x1 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] &= pkt[1]->packet_type;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x2 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] &= pkt[2]->packet_type;
+
+ eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
+ ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+ x3 = ipv4_hdr->dst_addr;
+ ipv4_flag[0] &= pkt[3]->packet_type;
+
+ dip[0] = _mm_set_epi32(x3, x2, x1, x0);
+}
+#else /* RTE_NEXT_ABI */
/*
* Read ol_flags and destination IPV4 addresses from 4 mbufs.
*/
@@ -1126,14 +1193,24 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
+#endif /* RTE_NEXT_ABI */
/*
* Lookup into LPM for destination port.
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
+#ifdef RTE_NEXT_ABI
+processx4_step2(const struct lcore_conf *qconf,
+ __m128i dip,
+ uint32_t ipv4_flag,
+ uint8_t portid,
+ struct rte_mbuf *pkt[FWDSTEP],
+ uint16_t dprt[FWDSTEP])
+#else
processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
+#endif /* RTE_NEXT_ABI */
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1143,7 +1220,11 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);
/* if all 4 packets are IPV4. */
+#ifdef RTE_NEXT_ABI
+ if (likely(ipv4_flag)) {
+#else
if (likely(flag != 0)) {
+#endif
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1193,6 +1274,16 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[2], te[2]);
_mm_store_si128(p[3], te[3]);
+#ifdef RTE_NEXT_ABI
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
+ &dst_port[0], pkt[0]->packet_type);
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
+ &dst_port[1], pkt[1]->packet_type);
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
+ &dst_port[2], pkt[2]->packet_type);
+ rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
+ &dst_port[3], pkt[3]->packet_type);
+#else /* RTE_NEXT_ABI */
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
&dst_port[0], pkt[0]->ol_flags);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
@@ -1201,6 +1292,7 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
&dst_port[2], pkt[2]->ol_flags);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
&dst_port[3], pkt[3]->ol_flags);
+#endif /* RTE_NEXT_ABI */
}
/*
@@ -1387,7 +1479,11 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
+#ifdef RTE_NEXT_ABI
+ uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
+#else
uint32_t flag[MAX_PKT_BURST / FWDSTEP];
+#endif
uint16_t pnum[MAX_PKT_BURST + 1];
#endif
@@ -1457,6 +1553,18 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 4);
for (j = 0; j < n ; j+=4) {
+#ifdef RTE_NEXT_ABI
+ uint32_t pkt_type =
+ pkts_burst[j]->packet_type &
+ pkts_burst[j+1]->packet_type &
+ pkts_burst[j+2]->packet_type &
+ pkts_burst[j+3]->packet_type;
+ if (pkt_type & RTE_PTYPE_L3_IPV4) {
+ simple_ipv4_fwd_4pkts(
+ &pkts_burst[j], portid, qconf);
+ } else if (pkt_type &
+ RTE_PTYPE_L3_IPV6) {
+#else /* RTE_NEXT_ABI */
uint32_t ol_flag = pkts_burst[j]->ol_flags
& pkts_burst[j+1]->ol_flags
& pkts_burst[j+2]->ol_flags
@@ -1465,6 +1573,7 @@ main_loop(__attribute__((unused)) void *dummy)
simple_ipv4_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else if (ol_flag & PKT_RX_IPV6_HDR) {
+#endif /* RTE_NEXT_ABI */
simple_ipv6_fwd_4pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1489,13 +1598,21 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
+#ifdef RTE_NEXT_ABI
+ &ipv4_flag[j / FWDSTEP]);
+#else
&flag[j / FWDSTEP]);
+#endif
}
k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
+#ifdef RTE_NEXT_ABI
+ ipv4_flag[j / FWDSTEP], portid,
+#else
flag[j / FWDSTEP], portid,
+#endif
&pkts_burst[j], &dst_port[j]);
}
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
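The four-packet fast path in the hunk above relies on AND-ing packet_type across
a group of mbufs: in the unified definitions all IPv4 L3 values share the
RTE_PTYPE_L3_IPV4 bit (and the IPv6 values the RTE_PTYPE_L3_IPV6 bit), so the bit
survives the AND only when the whole group matches. A minimal sketch of that
check, with a hypothetical helper:
    #include <rte_mbuf.h>

    /* Hypothetical helper showing why the batched test works: AND-ing the
     * packet_type of four mbufs leaves the shared RTE_PTYPE_L3_IPV4 bit set
     * only when every packet in the group carries an IPv4 L3 header. */
    static inline int
    burst4_all_ipv4(struct rte_mbuf *pkt[4])
    {
    	uint32_t pt = pkt[0]->packet_type & pkt[1]->packet_type &
    		      pkt[2]->packet_type & pkt[3]->packet_type;

    	return (pt & RTE_PTYPE_L3_IPV4) != 0;
    }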
* [dpdk-dev] [PATCH v10 18/19] examples/tep_termination: replace bit mask based packet type with unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (16 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 17/19] examples/l3fwd: " Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 19/19] mbuf: remove old packet type bit masks Helin Zhang
2015-07-15 23:00 ` [dpdk-dev] [PATCH v10 00/19] unified " Thomas Monjalon
19 siblings, 0 replies; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes would be enabled
by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/tep_termination/vxlan.c | 4 ++++
1 file changed, 4 insertions(+)
v9 changes:
* Used unified packet type to check if it is a VXLAN packet, included in
RTE_NEXT_ABI which is disabled by default.
v10 changes:
* Fixed a compile error.
diff --git a/examples/tep_termination/vxlan.c b/examples/tep_termination/vxlan.c
index b2a2f53..e98a29f 100644
--- a/examples/tep_termination/vxlan.c
+++ b/examples/tep_termination/vxlan.c
@@ -180,8 +180,12 @@ decapsulation(struct rte_mbuf *pkt)
* (rfc7348) or that the rx offload flag is set (i40e only
* currently)*/
if (udp_hdr->dst_port != rte_cpu_to_be_16(DEFAULT_VXLAN_PORT) &&
+#ifdef RTE_NEXT_ABI
+ (pkt->packet_type & RTE_PTYPE_TUNNEL_MASK) == 0)
+#else
(pkt->ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
PKT_RX_TUNNEL_IPV6_HDR)) == 0)
+#endif
return -1;
outer_header_len = info.outer_l2_len + info.outer_l3_len
+ sizeof(struct udp_hdr) + sizeof(struct vxlan_hdr);
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
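The hunk above only asks whether any tunnel type is present. With the unified
packet type, a specific encapsulation can also be matched by comparing the masked
field against a concrete value; a small sketch with hypothetical helpers, assuming
RTE_PTYPE_TUNNEL_VXLAN from the definitions patch of this series:
    #include <rte_mbuf.h>

    /* Hypothetical helpers: the first mirrors the "any tunnel" test used in
     * the patch above; the second matches one specific encapsulation.
     * Whether a given PMD reports RTE_PTYPE_TUNNEL_VXLAN exactly depends on
     * what its hardware can distinguish. */
    static inline int
    pkt_is_tunneled(const struct rte_mbuf *m)
    {
    	return (m->packet_type & RTE_PTYPE_TUNNEL_MASK) != 0;
    }

    static inline int
    pkt_is_vxlan(const struct rte_mbuf *m)
    {
    	return (m->packet_type & RTE_PTYPE_TUNNEL_MASK) ==
    	       RTE_PTYPE_TUNNEL_VXLAN;
    }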
* [dpdk-dev] [PATCH v10 19/19] mbuf: remove old packet type bit masks
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (17 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 18/19] examples/tep_termination: " Helin Zhang
@ 2015-07-09 16:31 ` Helin Zhang
2015-07-13 16:13 ` Thomas Monjalon
2015-07-15 23:00 ` [dpdk-dev] [PATCH v10 00/19] unified " Thomas Monjalon
19 siblings, 1 reply; 257+ messages in thread
From: Helin Zhang @ 2015-07-09 16:31 UTC (permalink / raw)
To: dev
As unified packet types are used instead, those old bit masks and
the relevant macros for packet type indication need to be removed.
To avoid breaking ABI compatibility, all the changes would be
enabled by RTE_NEXT_ABI, which is disabled by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.c | 4 ++++
lib/librte_mbuf/rte_mbuf.h | 4 ++++
2 files changed, 8 insertions(+)
v2 changes:
* Used redefined packet types and enlarged packet_type field in mbuf.
* Redefined the bit masks for packet RX offload flags.
v5 changes:
* Rolled back the bit masks of RX flags, for ABI compatibility.
v6 changes:
* Disabled the code changes for unified packet type by default, to
avoid breaking ABI compatibility.
v7 changes:
* Renamed RTE_UNIFIED_PKT_TYPE to RTE_NEXT_ABI.
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index f506517..4320dd4 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -251,14 +251,18 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
+#ifndef RTE_NEXT_ABI
case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
+#endif /* RTE_NEXT_ABI */
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
+#ifndef RTE_NEXT_ABI
case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
+#endif /* RTE_NEXT_ABI */
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 3a17d95..b90c73f 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -92,14 +92,18 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
+#ifndef RTE_NEXT_ABI
#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
+#endif /* RTE_NEXT_ABI */
#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
+#ifndef RTE_NEXT_ABI
#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
+#endif /* RTE_NEXT_ABI */
#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double VLAN stripped. */
--
1.9.3
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v10 01/19] mbuf: redefine packet_type in rte_mbuf
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 01/19] mbuf: redefine packet_type in rte_mbuf Helin Zhang
@ 2015-07-13 15:53 ` Thomas Monjalon
0 siblings, 0 replies; 257+ messages in thread
From: Thomas Monjalon @ 2015-07-13 15:53 UTC (permalink / raw)
To: Helin Zhang; +Cc: dev
2015-07-10 00:31, Helin Zhang:
> To avoid breaking ABI compatibility, all the changes would be enabled by RTE_NEXT_ABI,
> which is disabled by default.
It is enabled by default.
This comment will be removed from all patches of the series.
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v10 19/19] mbuf: remove old packet type bit masks
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 19/19] mbuf: remove old packet type bit masks Helin Zhang
@ 2015-07-13 16:13 ` Thomas Monjalon
2015-07-13 16:25 ` Zhang, Helin
` (2 more replies)
0 siblings, 3 replies; 257+ messages in thread
From: Thomas Monjalon @ 2015-07-13 16:13 UTC (permalink / raw)
To: Helin Zhang; +Cc: dev
2015-07-10 00:31, Helin Zhang:
> As unified packet types are used instead, those old bit masks and
> the relevant macros for packet type indication need to be removed.
It breaks mlx4 and cxgbe drivers.
The mlx4 driver didn't have the chance to be updated in this series.
Adrien, please, could you help Helin to convert ol_flags to packet type?
The cxgbe changes need to be updated after
78fc1a716ae8 ("cxgbe: improve Rx performance")
I suggest this update:
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1419,7 +1419,11 @@ static int process_responses(struct sge_rspq *q, int budget,
unmap_rx_buf(&rxq->fl);
if (cpl->l2info & htonl(F_RXF_IP)) {
+#ifdef RTE_NEXT_ABI
+ mbuf->packet_type = RTE_PTYPE_L3_IPV4;
+#else
pkt->ol_flags |= PKT_RX_IPV4_HDR;
+#endif
if (unlikely(!csum_ok))
pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -1427,7 +1431,11 @@ static int process_responses(struct sge_rspq *q, int budget,
htonl(F_RXF_UDP | F_RXF_TCP)) && !csum_ok)
pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
} else if (cpl->l2info & htonl(F_RXF_IP6)) {
+#ifdef RTE_NEXT_ABI
+ mbuf->packet_type = RTE_PTYPE_L3_IPV6;
+#else
pkt->ol_flags |= PKT_RX_IPV6_HDR;
+#endif
}
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v10 19/19] mbuf: remove old packet type bit masks
2015-07-13 16:13 ` Thomas Monjalon
@ 2015-07-13 16:25 ` Zhang, Helin
2015-07-13 16:27 ` Thomas Monjalon
2015-07-13 17:58 ` Zhang, Helin
2015-07-15 17:32 ` [dpdk-dev] [PATCH] mlx4: replace some offload flags with packet type Thomas Monjalon
2 siblings, 1 reply; 257+ messages in thread
From: Zhang, Helin @ 2015-07-13 16:25 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
Hi Thomas
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Monday, July 13, 2015 9:13 AM
> To: Zhang, Helin
> Cc: dev@dpdk.org; Adrien Mazarguil
> Subject: Re: [dpdk-dev] [PATCH v10 19/19] mbuf: remove old packet type bit
> masks
>
> 2015-07-10 00:31, Helin Zhang:
> > As unified packet types are used instead, those old bit masks and the
> > relevant macros for packet type indication need to be removed.
>
> It breaks mlx4 and cxgbe drivers.
>
> The mlx4 driver didn't have the chance to be updated in this series.
> Adrien, please, could you help Helin to convert ol_flags to packet type?
I think I have already reworked that change in v9 and v10 recently.
http://www.dpdk.org/dev/patchwork/patch/6253/
Regards,
Helin
>
> The cxgbe changes need to be updated after
> 78fc1a716ae8 ("cxgbe: improve Rx performance") I suggest this update:
>
> --- a/drivers/net/cxgbe/sge.c
> +++ b/drivers/net/cxgbe/sge.c
> @@ -1419,7 +1419,11 @@ static int process_responses(struct sge_rspq *q, int
> budget,
> unmap_rx_buf(&rxq->fl);
>
> if (cpl->l2info & htonl(F_RXF_IP)) {
> +#ifdef RTE_NEXT_ABI
> + mbuf->packet_type = RTE_PTYPE_L3_IPV4;
> +#else
> pkt->ol_flags |= PKT_RX_IPV4_HDR;
> +#endif
> if (unlikely(!csum_ok))
> pkt->ol_flags |=
> PKT_RX_IP_CKSUM_BAD;
>
> @@ -1427,7 +1431,11 @@ static int process_responses(struct sge_rspq *q, int
> budget,
> htonl(F_RXF_UDP | F_RXF_TCP))
> && !csum_ok)
> pkt->ol_flags |=
> PKT_RX_L4_CKSUM_BAD;
> } else if (cpl->l2info & htonl(F_RXF_IP6)) {
> +#ifdef RTE_NEXT_ABI
> + mbuf->packet_type = RTE_PTYPE_L3_IPV6;
> +#else
> pkt->ol_flags |= PKT_RX_IPV6_HDR;
> +#endif
> }
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v10 19/19] mbuf: remove old packet type bit masks
2015-07-13 16:25 ` Zhang, Helin
@ 2015-07-13 16:27 ` Thomas Monjalon
2015-07-13 16:32 ` Zhang, Helin
0 siblings, 1 reply; 257+ messages in thread
From: Thomas Monjalon @ 2015-07-13 16:27 UTC (permalink / raw)
To: Zhang, Helin; +Cc: dev
2015-07-13 16:25, Zhang, Helin:
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > 2015-07-10 00:31, Helin Zhang:
> > > As unified packet types are used instead, those old bit masks and the
> > > relevant macros for packet type indication need to be removed.
> >
> > It breaks mlx4 and cxgbe drivers.
[...]
> I think I have already reworked that change in v9 and v10 recently.
> http://www.dpdk.org/dev/patchwork/patch/6253/
No, your changes were for t4_ethrx_handler().
Since recent cxgbe change, we also need to update process_responses().
> > The cxgbe changes need to be updated after
> > 78fc1a716ae8 ("cxgbe: improve Rx performance") I suggest this update:
> >
> > --- a/drivers/net/cxgbe/sge.c
> > +++ b/drivers/net/cxgbe/sge.c
> > @@ -1419,7 +1419,11 @@ static int process_responses(struct sge_rspq *q, int
> > budget,
> > unmap_rx_buf(&rxq->fl);
> >
> > if (cpl->l2info & htonl(F_RXF_IP)) {
> > +#ifdef RTE_NEXT_ABI
> > + mbuf->packet_type = RTE_PTYPE_L3_IPV4;
> > +#else
> > pkt->ol_flags |= PKT_RX_IPV4_HDR;
> > +#endif
> > if (unlikely(!csum_ok))
> > pkt->ol_flags |=
> > PKT_RX_IP_CKSUM_BAD;
> >
> > @@ -1427,7 +1431,11 @@ static int process_responses(struct sge_rspq *q, int
> > budget,
> > htonl(F_RXF_UDP | F_RXF_TCP))
> > && !csum_ok)
> > pkt->ol_flags |=
> > PKT_RX_L4_CKSUM_BAD;
> > } else if (cpl->l2info & htonl(F_RXF_IP6)) {
> > +#ifdef RTE_NEXT_ABI
> > + mbuf->packet_type = RTE_PTYPE_L3_IPV6;
> > +#else
> > pkt->ol_flags |= PKT_RX_IPV6_HDR;
> > +#endif
> > }
>
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v10 19/19] mbuf: remove old packet type bit masks
2015-07-13 16:27 ` Thomas Monjalon
@ 2015-07-13 16:32 ` Zhang, Helin
0 siblings, 0 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-07-13 16:32 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Monday, July 13, 2015 9:28 AM
> To: Zhang, Helin
> Cc: dev@dpdk.org; Adrien Mazarguil; Rahul Lakkireddy
> Subject: Re: [dpdk-dev] [PATCH v10 19/19] mbuf: remove old packet type bit
> masks
>
> 2015-07-13 16:25, Zhang, Helin:
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > > 2015-07-10 00:31, Helin Zhang:
> > > > As unified packet types are used instead, those old bit masks and
> > > > the relevant macros for packet type indication need to be removed.
> > >
> > > It breaks mlx4 and cxgbe drivers.
> [...]
> > I think I have already reworked that change in v9 and v10 recently.
> > http://www.dpdk.org/dev/patchwork/patch/6253/
>
> No, your changes were for t4_ethrx_handler().
> Since recent cxgbe change, we also need to update process_responses().
OK. Thanks! It involves so many components.
Thank you all for helping on unified packet type!
- Helin
>
> > > The cxgbe changes need to be updated after
> > > 78fc1a716ae8 ("cxgbe: improve Rx performance") I suggest this update:
> > >
> > > --- a/drivers/net/cxgbe/sge.c
> > > +++ b/drivers/net/cxgbe/sge.c
> > > @@ -1419,7 +1419,11 @@ static int process_responses(struct sge_rspq
> > > *q, int budget,
> > > unmap_rx_buf(&rxq->fl);
> > >
> > > if (cpl->l2info & htonl(F_RXF_IP)) {
> > > +#ifdef RTE_NEXT_ABI
> > > + mbuf->packet_type =
> > > +RTE_PTYPE_L3_IPV4; #else
> > > pkt->ol_flags |= PKT_RX_IPV4_HDR;
> > > +#endif
> > > if (unlikely(!csum_ok))
> > > pkt->ol_flags |=
> > > PKT_RX_IP_CKSUM_BAD;
> > >
> > > @@ -1427,7 +1431,11 @@ static int process_responses(struct sge_rspq
> > > *q, int budget,
> > > htonl(F_RXF_UDP | F_RXF_TCP))
> > > && !csum_ok)
> > > pkt->ol_flags |=
> > > PKT_RX_L4_CKSUM_BAD;
> > > } else if (cpl->l2info & htonl(F_RXF_IP6)) {
> > > +#ifdef RTE_NEXT_ABI
> > > + mbuf->packet_type =
> > > +RTE_PTYPE_L3_IPV6; #else
> > > pkt->ol_flags |= PKT_RX_IPV6_HDR;
> > > +#endif
> > > }
> >
>
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v10 19/19] mbuf: remove old packet type bit masks
2015-07-13 16:13 ` Thomas Monjalon
2015-07-13 16:25 ` Zhang, Helin
@ 2015-07-13 17:58 ` Zhang, Helin
2015-07-15 17:32 ` [dpdk-dev] [PATCH] mlx4: replace some offload flags with packet type Thomas Monjalon
2 siblings, 0 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-07-13 17:58 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Monday, July 13, 2015 9:13 AM
> To: Zhang, Helin
> Cc: dev@dpdk.org; Adrien Mazarguil
> Subject: Re: [dpdk-dev] [PATCH v10 19/19] mbuf: remove old packet type bit
> masks
>
> 2015-07-10 00:31, Helin Zhang:
> > As unified packet types are used instead, those old bit masks and the
> > relevant macros for packet type indication need to be removed.
>
> It breaks mlx4 and cxgbe drivers.
>
> The mlx4 driver didn't have the chance to be updated in this series.
> Adrien, please, could you help Helin to convert ol_flags to packet type?
>
> The cxgbe changes need to be updated after
> 78fc1a716ae8 ("cxgbe: improve Rx performance") I suggest this update:
>
> --- a/drivers/net/cxgbe/sge.c
> +++ b/drivers/net/cxgbe/sge.c
> @@ -1419,7 +1419,11 @@ static int process_responses(struct sge_rspq *q, int
> budget,
> unmap_rx_buf(&rxq->fl);
>
> if (cpl->l2info & htonl(F_RXF_IP)) {
> +#ifdef RTE_NEXT_ABI
> + mbuf->packet_type = RTE_PTYPE_L3_IPV4;
> +#else
> pkt->ol_flags |= PKT_RX_IPV4_HDR;
> +#endif
> if (unlikely(!csum_ok))
> pkt->ol_flags |=
> PKT_RX_IP_CKSUM_BAD;
>
> @@ -1427,7 +1431,11 @@ static int process_responses(struct sge_rspq *q, int
> budget,
> htonl(F_RXF_UDP | F_RXF_TCP))
> && !csum_ok)
> pkt->ol_flags |=
> PKT_RX_L4_CKSUM_BAD;
> } else if (cpl->l2info & htonl(F_RXF_IP6)) {
> +#ifdef RTE_NEXT_ABI
> + mbuf->packet_type = RTE_PTYPE_L3_IPV6;
> +#else
> pkt->ol_flags |= PKT_RX_IPV6_HDR;
> +#endif
> }
Acked-by: Helin Zhang <helin.zhang@intel.com>
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v10 02/19] mbuf: add definitions of unified packet types
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 02/19] mbuf: add definitions of unified packet types Helin Zhang
@ 2015-07-15 10:19 ` Olivier MATZ
0 siblings, 0 replies; 257+ messages in thread
From: Olivier MATZ @ 2015-07-15 10:19 UTC (permalink / raw)
To: Helin Zhang, dev
On 07/09/2015 06:31 PM, Helin Zhang wrote:
> As there are only 6 bit flags in ol_flags for indicating packet
> types, they are not enough to describe all the possible packet
> types hardware can recognize. For example, i40e hardware can
> recognize more than 150 packet types. The unified packet type is
> composed of L2 type, L3 type, L4 type, tunnel type, inner L2 type,
> inner L3 type and inner L4 type fields, and can be stored in
> the 32-bit 'packet_type' field of 'struct rte_mbuf'.
> To avoid breaking ABI compatibility, all the changes would be
> enabled by RTE_NEXT_ABI, which is disabled by default.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
^ permalink raw reply [flat|nested] 257+ messages in thread
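The commit message quoted above describes packet_type as seven packed sub-fields.
A purely illustrative sketch of pulling them apart, assuming the per-field
RTE_PTYPE_*_MASK macros come from the definitions added by this patch; the
function itself is not part of the series:
    #include <stdio.h>
    #include <rte_mbuf.h>

    /* Illustrative only: extract the seven sub-fields of the 32-bit
     * packet_type using the corresponding field masks. */
    static inline void
    dump_ptype_fields(const struct rte_mbuf *m)
    {
    	uint32_t pt = m->packet_type;

    	printf("l2=%#x l3=%#x l4=%#x tunnel=%#x il2=%#x il3=%#x il4=%#x\n",
    	       (unsigned)(pt & RTE_PTYPE_L2_MASK),
    	       (unsigned)(pt & RTE_PTYPE_L3_MASK),
    	       (unsigned)(pt & RTE_PTYPE_L4_MASK),
    	       (unsigned)(pt & RTE_PTYPE_TUNNEL_MASK),
    	       (unsigned)(pt & RTE_PTYPE_INNER_L2_MASK),
    	       (unsigned)(pt & RTE_PTYPE_INNER_L3_MASK),
    	       (unsigned)(pt & RTE_PTYPE_INNER_L4_MASK));
    }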
* [dpdk-dev] [PATCH] mlx4: replace some offload flags with packet type
2015-07-13 16:13 ` Thomas Monjalon
2015-07-13 16:25 ` Zhang, Helin
2015-07-13 17:58 ` Zhang, Helin
@ 2015-07-15 17:32 ` Thomas Monjalon
2015-07-15 18:06 ` Zhang, Helin
2 siblings, 1 reply; 257+ messages in thread
From: Thomas Monjalon @ 2015-07-15 17:32 UTC (permalink / raw)
To: helin.zhang; +Cc: dev
The workaround for Tx tunnel offloading can now be replaced with packet
type flag checking.
The ol_flags for IPv4/IPv6 and tunnel Rx offloading are replaced with
packet type flags.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
---
On the Rx side, the tunnel type cannot be set.
So RTE_ETH_IS_TUNNEL_PKT() will return a wrong result even if RTE_PTYPE_INNER_* is set.
What about fixing RTE_ETH_IS_TUNNEL_PKT() to handle this case?
drivers/net/mlx4/mlx4.c | 58 ++++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 53 insertions(+), 5 deletions(-)
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index f4491e7..3f5e9f3 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -1263,14 +1263,17 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
/* HW does not support checksum offloads at arbitrary
* offsets but automatically recognizes the packet
* type. For inner L3/L4 checksums, only VXLAN (UDP)
- * tunnels are currently supported.
- *
- * FIXME: since PKT_TX_UDP_TUNNEL_PKT has been removed,
+ * tunnels are currently supported. */
+#ifdef RTE_NEXT_ABI
+ if (RTE_ETH_IS_TUNNEL_PKT(buf->packet_type))
+#else
+ /* FIXME: since PKT_TX_UDP_TUNNEL_PKT has been removed,
* the outer packet type is unknown. All we know is
* that the L2 header is of unusual length (not
* ETHER_HDR_LEN with or without 802.1Q header). */
if ((buf->l2_len != ETHER_HDR_LEN) &&
(buf->l2_len != (ETHER_HDR_LEN + 4)))
+#endif
send_flags |= IBV_EXP_QP_BURST_TUNNEL;
}
if (likely(segs == 1)) {
@@ -2485,6 +2488,41 @@ rxq_cleanup(struct rxq *rxq)
memset(rxq, 0, sizeof(*rxq));
}
+#ifdef RTE_NEXT_ABI
+/**
+ * Translate RX completion flags to packet type.
+ *
+ * @param flags
+ * RX completion flags returned by poll_length_flags().
+ *
+ * @return
+ * Packet type for struct rte_mbuf.
+ */
+static inline uint32_t
+rxq_cq_to_pkt_type(uint32_t flags)
+{
+ uint32_t pkt_type = 0;
+
+ if (flags & IBV_EXP_CQ_RX_TUNNEL_PACKET)
+ pkt_type |=
+ TRANSPOSE(flags,
+ IBV_EXP_CQ_RX_OUTER_IPV4_PACKET, RTE_PTYPE_L3_IPV4) |
+ TRANSPOSE(flags,
+ IBV_EXP_CQ_RX_OUTER_IPV6_PACKET, RTE_PTYPE_L3_IPV6) |
+ TRANSPOSE(flags,
+ IBV_EXP_CQ_RX_IPV4_PACKET, RTE_PTYPE_INNER_L3_IPV4) |
+ TRANSPOSE(flags,
+ IBV_EXP_CQ_RX_IPV6_PACKET, RTE_PTYPE_INNER_L3_IPV6);
+ else
+ pkt_type |=
+ TRANSPOSE(flags,
+ IBV_EXP_CQ_RX_IPV4_PACKET, RTE_PTYPE_L3_IPV4) |
+ TRANSPOSE(flags,
+ IBV_EXP_CQ_RX_IPV6_PACKET, RTE_PTYPE_L3_IPV6);
+ return pkt_type;
+}
+#endif /* RTE_NEXT_ABI */
+
/**
* Translate RX completion flags to offload flags.
*
@@ -2499,11 +2537,13 @@ rxq_cleanup(struct rxq *rxq)
static inline uint32_t
rxq_cq_to_ol_flags(const struct rxq *rxq, uint32_t flags)
{
- uint32_t ol_flags;
+ uint32_t ol_flags = 0;
- ol_flags =
+#ifndef RTE_NEXT_ABI
+ ol_flags |=
TRANSPOSE(flags, IBV_EXP_CQ_RX_IPV4_PACKET, PKT_RX_IPV4_HDR) |
TRANSPOSE(flags, IBV_EXP_CQ_RX_IPV6_PACKET, PKT_RX_IPV6_HDR);
+#endif
if (rxq->csum)
ol_flags |=
TRANSPOSE(~flags,
@@ -2519,12 +2559,14 @@ rxq_cq_to_ol_flags(const struct rxq *rxq, uint32_t flags)
*/
if ((flags & IBV_EXP_CQ_RX_TUNNEL_PACKET) && (rxq->csum_l2tun))
ol_flags |=
+#ifndef RTE_NEXT_ABI
TRANSPOSE(flags,
IBV_EXP_CQ_RX_OUTER_IPV4_PACKET,
PKT_RX_TUNNEL_IPV4_HDR) |
TRANSPOSE(flags,
IBV_EXP_CQ_RX_OUTER_IPV6_PACKET,
PKT_RX_TUNNEL_IPV6_HDR) |
+#endif
TRANSPOSE(~flags,
IBV_EXP_CQ_RX_OUTER_IP_CSUM_OK,
PKT_RX_IP_CKSUM_BAD) |
@@ -2716,6 +2758,9 @@ mlx4_rx_burst_sp(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
NB_SEGS(pkt_buf) = j;
PORT(pkt_buf) = rxq->port_id;
PKT_LEN(pkt_buf) = pkt_buf_len;
+#ifdef RTE_NEXT_ABI
+ pkt_buf->packet_type = rxq_cq_to_pkt_type(flags);
+#endif
pkt_buf->ol_flags = rxq_cq_to_ol_flags(rxq, flags);
/* Return packet. */
@@ -2876,6 +2921,9 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
NEXT(seg) = NULL;
PKT_LEN(seg) = len;
DATA_LEN(seg) = len;
+#ifdef RTE_NEXT_ABI
+ seg->packet_type = rxq_cq_to_pkt_type(flags);
+#endif
seg->ol_flags = rxq_cq_to_ol_flags(rxq, flags);
/* Return packet. */
--
2.4.2
^ permalink raw reply [flat|nested] 257+ messages in thread
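The rxq_cq_to_pkt_type() function above is an instance of the general PMD-side
recipe: translate whatever per-packet classification the hardware reports into
RTE_PTYPE_* values and store the result in mbuf->packet_type. A stripped-down
sketch, not taken from any driver; the HW_DESC_IPV4/HW_DESC_IPV6 descriptor bits
are made up for illustration:
    #include <rte_mbuf.h>

    #define HW_DESC_IPV4 (1u << 0)	/* hypothetical descriptor bits */
    #define HW_DESC_IPV6 (1u << 1)

    /* Sketch of a PMD Rx routine filling packet_type from descriptor flags. */
    static inline void
    rx_fill_packet_type(struct rte_mbuf *m, uint32_t desc_flags)
    {
    	uint32_t pt = RTE_PTYPE_L2_ETHER;

    	if (desc_flags & HW_DESC_IPV4)
    		pt |= RTE_PTYPE_L3_IPV4;
    	else if (desc_flags & HW_DESC_IPV6)
    		pt |= RTE_PTYPE_L3_IPV6;

    	m->packet_type = pt;
    }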
* Re: [dpdk-dev] [PATCH] mlx4: replace some offload flags with packet type
2015-07-15 17:32 ` [dpdk-dev] [PATCH] mlx4: replace some offload flags with packet type Thomas Monjalon
@ 2015-07-15 18:06 ` Zhang, Helin
2015-07-15 23:05 ` Thomas Monjalon
0 siblings, 1 reply; 257+ messages in thread
From: Zhang, Helin @ 2015-07-15 18:06 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Wednesday, July 15, 2015 10:32 AM
> To: Zhang, Helin
> Cc: dev@dpdk.org
> Subject: [PATCH] mlx4: replace some offload flags with packet type
>
> The workaround for Tx tunnel offloading can now be replaced with packet type
> flag checking.
> The ol_flags for IPv4/IPv6 and tunnel Rx offloading are replaced with packet type
> flags.
>
> Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
> Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> ---
>
> On Rx side, the tunnel type cannot be set.
> So RTE_ETH_IS_TUNNEL_PKT() will return wrong even if RTE_PTYPE_INNER_* is
> set.
> What about fixing RTE_ETH_IS_TUNNEL_PKT() to handle this case?
>
> drivers/net/mlx4/mlx4.c | 58
> ++++++++++++++++++++++++++++++++++++++++++++-----
> 1 file changed, 53 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c index
> f4491e7..3f5e9f3 100644
> --- a/drivers/net/mlx4/mlx4.c
> +++ b/drivers/net/mlx4/mlx4.c
> @@ -1263,14 +1263,17 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf
> **pkts, uint16_t pkts_n)
> /* HW does not support checksum offloads at arbitrary
> * offsets but automatically recognizes the packet
> * type. For inner L3/L4 checksums, only VXLAN (UDP)
> - * tunnels are currently supported.
> - *
> - * FIXME: since PKT_TX_UDP_TUNNEL_PKT has been removed,
> + * tunnels are currently supported. */ #ifdef RTE_NEXT_ABI
> + if (RTE_ETH_IS_TUNNEL_PKT(buf->packet_type))
> +#else
> + /* FIXME: since PKT_TX_UDP_TUNNEL_PKT has been removed,
> * the outer packet type is unknown. All we know is
> * that the L2 header is of unusual length (not
> * ETHER_HDR_LEN with or without 802.1Q header). */
> if ((buf->l2_len != ETHER_HDR_LEN) &&
> (buf->l2_len != (ETHER_HDR_LEN + 4)))
> +#endif
> send_flags |= IBV_EXP_QP_BURST_TUNNEL;
> }
> if (likely(segs == 1)) {
> @@ -2485,6 +2488,41 @@ rxq_cleanup(struct rxq *rxq)
> memset(rxq, 0, sizeof(*rxq));
> }
>
> +#ifdef RTE_NEXT_ABI
> +/**
> + * Translate RX completion flags to packet type.
> + *
> + * @param flags
> + * RX completion flags returned by poll_length_flags().
> + *
> + * @return
> + * Packet type for struct rte_mbuf.
> + */
> +static inline uint32_t
> +rxq_cq_to_pkt_type(uint32_t flags)
> +{
> + uint32_t pkt_type = 0;
Initial value of 0 seems not needed.
> +
> + if (flags & IBV_EXP_CQ_RX_TUNNEL_PACKET)
> + pkt_type |=
Operand of 'OR' is not needed at all.
> + TRANSPOSE(flags,
> + IBV_EXP_CQ_RX_OUTER_IPV4_PACKET,
> RTE_PTYPE_L3_IPV4) |
> + TRANSPOSE(flags,
> + IBV_EXP_CQ_RX_OUTER_IPV6_PACKET,
> RTE_PTYPE_L3_IPV6) |
> + TRANSPOSE(flags,
> + IBV_EXP_CQ_RX_IPV4_PACKET,
> RTE_PTYPE_INNER_L3_IPV4) |
> + TRANSPOSE(flags,
> + IBV_EXP_CQ_RX_IPV6_PACKET,
> RTE_PTYPE_INNER_L3_IPV6);
> + else
> + pkt_type |=
Operand of 'OR' is not needed at all.
Regards,
Helin
> + TRANSPOSE(flags,
> + IBV_EXP_CQ_RX_IPV4_PACKET, RTE_PTYPE_L3_IPV4)
> |
> + TRANSPOSE(flags,
> + IBV_EXP_CQ_RX_IPV6_PACKET, RTE_PTYPE_L3_IPV6);
> + return pkt_type;
> +}
> +#endif /* RTE_NEXT_ABI */
> +
> /**
> * Translate RX completion flags to offload flags.
> *
> @@ -2499,11 +2537,13 @@ rxq_cleanup(struct rxq *rxq) static inline uint32_t
> rxq_cq_to_ol_flags(const struct rxq *rxq, uint32_t flags) {
> - uint32_t ol_flags;
> + uint32_t ol_flags = 0;
>
> - ol_flags =
> +#ifndef RTE_NEXT_ABI
> + ol_flags |=
> TRANSPOSE(flags, IBV_EXP_CQ_RX_IPV4_PACKET, PKT_RX_IPV4_HDR)
> |
> TRANSPOSE(flags, IBV_EXP_CQ_RX_IPV6_PACKET,
> PKT_RX_IPV6_HDR);
> +#endif
> if (rxq->csum)
> ol_flags |=
> TRANSPOSE(~flags,
> @@ -2519,12 +2559,14 @@ rxq_cq_to_ol_flags(const struct rxq *rxq, uint32_t
> flags)
> */
> if ((flags & IBV_EXP_CQ_RX_TUNNEL_PACKET) && (rxq->csum_l2tun))
> ol_flags |=
> +#ifndef RTE_NEXT_ABI
> TRANSPOSE(flags,
> IBV_EXP_CQ_RX_OUTER_IPV4_PACKET,
> PKT_RX_TUNNEL_IPV4_HDR) |
> TRANSPOSE(flags,
> IBV_EXP_CQ_RX_OUTER_IPV6_PACKET,
> PKT_RX_TUNNEL_IPV6_HDR) |
> +#endif
> TRANSPOSE(~flags,
> IBV_EXP_CQ_RX_OUTER_IP_CSUM_OK,
> PKT_RX_IP_CKSUM_BAD) |
> @@ -2716,6 +2758,9 @@ mlx4_rx_burst_sp(void *dpdk_rxq, struct rte_mbuf
> **pkts, uint16_t pkts_n)
> NB_SEGS(pkt_buf) = j;
> PORT(pkt_buf) = rxq->port_id;
> PKT_LEN(pkt_buf) = pkt_buf_len;
> +#ifdef RTE_NEXT_ABI
> + pkt_buf->packet_type = rxq_cq_to_pkt_type(flags); #endif
> pkt_buf->ol_flags = rxq_cq_to_ol_flags(rxq, flags);
>
> /* Return packet. */
> @@ -2876,6 +2921,9 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
> uint16_t pkts_n)
> NEXT(seg) = NULL;
> PKT_LEN(seg) = len;
> DATA_LEN(seg) = len;
> +#ifdef RTE_NEXT_ABI
> + seg->packet_type = rxq_cq_to_pkt_type(flags); #endif
> seg->ol_flags = rxq_cq_to_ol_flags(rxq, flags);
>
> /* Return packet. */
> --
> 2.4.2
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v10 00/19] unified packet type
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
` (18 preceding siblings ...)
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 19/19] mbuf: remove old packet type bit masks Helin Zhang
@ 2015-07-15 23:00 ` Thomas Monjalon
2015-07-15 23:51 ` Zhang, Helin
19 siblings, 1 reply; 257+ messages in thread
From: Thomas Monjalon @ 2015-07-15 23:00 UTC (permalink / raw)
To: Helin Zhang; +Cc: dev
2015-07-10 00:31, Helin Zhang:
> Currently only 6 bits which are stored in ol_flags are used to indicate the
> packet types. This is not enough, as some NIC hardware can recognize quite
> a lot of packet types, e.g i40e hardware can recognize more than 150 packet
> types. Hiding those packet types hides hardware offload capabilities which
> could be quite useful for improving performance and for end users.
> So a unified packet type is needed to support all possible PMDs. The 16-bit
> packet_type field in the mbuf structure can be changed to 32 bits and used for
> this purpose. In addition, all packet types stored in the ol_flags field should
> be deleted entirely, and 6 bits of ol_flags can be saved as the benefit.
>
> Initially, 32 bits of packet_type can be divided into several sub fields to
> indicate different packet type information of a packet. The initial design
> is to divide those bits into fields for L2 types, L3 types, L4 types, tunnel
> types, inner L2 types, inner L3 types and inner L4 types. All PMDs should
> translate the offloaded packet types into these 7 fields of information, for
> user applications.
>
> To avoid breaking ABI compatibility, currently all the code changes for
> unified packet type are disabled at compile time by default. Users can enable
> it manually by defining the macro of RTE_NEXT_ABI. The code changes will be
> valid by default in a future release, and the old version will be deleted
> accordingly, after the ABI change process is done.
Applied with fixes for cxgbe and mlx4, thanks everyone
The macro RTE_ETH_IS_TUNNEL_PKT may need to take RTE_PTYPE_INNER_* into account.
^ permalink raw reply [flat|nested] 257+ messages in thread
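One possible shape of the RTE_ETH_IS_TUNNEL_PKT() adjustment mentioned above,
purely as a sketch; nothing in this thread settles whether or how the macro was
actually changed:
    /* Sketch only, not a committed change: also treat a packet as tunneled
     * when any inner-layer field is set, so PMDs that cannot report the
     * outer tunnel type (see the mlx4 Rx note earlier) are still covered. */
    #define RTE_ETH_IS_TUNNEL_PKT(ptype)              \
    	((ptype) & (RTE_PTYPE_TUNNEL_MASK |       \
    		    RTE_PTYPE_INNER_L2_MASK |     \
    		    RTE_PTYPE_INNER_L3_MASK |     \
    		    RTE_PTYPE_INNER_L4_MASK))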
* Re: [dpdk-dev] [PATCH] mlx4: replace some offload flags with packet type
2015-07-15 18:06 ` Zhang, Helin
@ 2015-07-15 23:05 ` Thomas Monjalon
0 siblings, 0 replies; 257+ messages in thread
From: Thomas Monjalon @ 2015-07-15 23:05 UTC (permalink / raw)
To: Zhang, Helin; +Cc: dev
2015-07-15 18:06, Zhang, Helin:
> > The workaround for Tx tunnel offloading can now be replaced with packet type
> > flag checking.
> > The ol_flags for IPv4/IPv6 and tunnel Rx offloading are replaced with packet type
> > flags.
> >
> > Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
> > Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
[...]
> > + uint32_t pkt_type = 0;
> Initial value of 0 seems not needed.
>
> > +
> > + if (flags & IBV_EXP_CQ_RX_TUNNEL_PACKET)
> > + pkt_type |=
> Operand of 'OR' is not needed at all.
Matter of taste (OR allows to add more flags before).
Applied with above changes.
^ permalink raw reply [flat|nested] 257+ messages in thread
* Re: [dpdk-dev] [PATCH v10 00/19] unified packet type
2015-07-15 23:00 ` [dpdk-dev] [PATCH v10 00/19] unified " Thomas Monjalon
@ 2015-07-15 23:51 ` Zhang, Helin
0 siblings, 0 replies; 257+ messages in thread
From: Zhang, Helin @ 2015-07-15 23:51 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Wednesday, July 15, 2015 4:01 PM
> To: Zhang, Helin
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v10 00/19] unified packet type
>
> 2015-07-10 00:31, Helin Zhang:
> > Currently only 6 bits which are stored in ol_flags are used to
> > indicate the packet types. This is not enough, as some NIC hardware
> > can recognize quite a lot of packet types, e.g i40e hardware can
> > recognize more than 150 packet types. Hiding those packet types hides
> > hardware offload capabilities which could be quite useful for improving
> performance and for end users.
> > So a unified packet type is needed to support all possible PMDs. The
> > 16-bit packet_type field in the mbuf structure can be changed to 32 bits and
> > used for this purpose. In addition, all packet types stored in the ol_flags
> > field should be deleted entirely, and 6 bits of ol_flags can be saved as the benefit.
> >
> > Initially, 32 bits of packet_type can be divided into several sub
> > fields to indicate different packet type information of a packet. The
> > initial design is to divide those bits into fields for L2 types, L3
> > types, L4 types, tunnel types, inner L2 types, inner L3 types and
> > inner L4 types. All PMDs should translate the offloaded packet types
> > into these 7 fields of information, for user applications.
> >
> > To avoid breaking ABI compatibility, currently all the code changes
> > for unified packet type are disabled at compile time by default. Users
> > can enable it manually by defining the macro of RTE_NEXT_ABI. The code
> > changes will be valid by default in a future release, and the old
> > version will be deleted accordingly, after the ABI change process is done.
>
> Applied with fixes for cxgbe and mlx4, thanks everyone
>
> The macro RTE_ETH_IS_TUNNEL_PKT may need to take RTE_PTYPE_INNER_*
> into account.
Thank you so much!
Thanks to all the contributors!
Helin
^ permalink raw reply [flat|nested] 257+ messages in thread
end of thread, other threads:[~2015-07-15 23:51 UTC | newest]
Thread overview: 257+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <1421637666-16872-1-git-send-email-helin.zhang@intel.com>
2015-01-29 3:15 ` [dpdk-dev] [PATCH 00/17] unified packet type Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 01/17] mbuf: add definitions of unified packet types Helin Zhang
2015-01-30 13:56 ` Olivier MATZ
2015-02-02 1:43 ` Zhang, Helin
[not found] ` <54CF5CF8.2090605@6wind.com>
2015-02-03 3:18 ` Zhang, Helin
2015-02-03 6:37 ` Zhang, Helin
2015-02-03 9:12 ` Olivier MATZ
2015-01-29 3:15 ` [dpdk-dev] [PATCH 02/17] e1000: support of unified packet type Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 03/17] ixgbe: " Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 04/17] " Helin Zhang
2015-01-29 23:30 ` Bruce Richardson
2015-01-29 23:52 ` Liang, Cunming
2015-01-30 3:39 ` Bruce Richardson
2015-01-30 6:09 ` Zhang, Helin
2015-01-29 3:15 ` [dpdk-dev] [PATCH 05/17] i40e: " Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 06/17] bond: " Helin Zhang
2015-02-11 15:01 ` Declan Doherty
2015-02-13 0:36 ` Zhang, Helin
2015-01-29 3:15 ` [dpdk-dev] [PATCH 07/17] enic: " Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 08/17] vmxnet3: " Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 09/17] app/test-pipeline: " Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 10/17] app/test-pmd: " Helin Zhang
2015-01-29 3:15 ` [dpdk-dev] [PATCH 11/17] app/test: " Helin Zhang
2015-01-29 3:16 ` [dpdk-dev] [PATCH 12/17] examples/ip_fragmentation: " Helin Zhang
2015-01-29 3:16 ` [dpdk-dev] [PATCH 13/17] examples/ip_reassembly: " Helin Zhang
2015-01-29 3:16 ` [dpdk-dev] [PATCH 14/17] examples/l3fwd-acl: " Helin Zhang
2015-01-29 3:16 ` [dpdk-dev] [PATCH 15/17] examples/l3fwd-power: " Helin Zhang
2015-01-29 3:16 ` [dpdk-dev] [PATCH 16/17] examples/l3fwd: " Helin Zhang
2015-01-29 3:16 ` [dpdk-dev] [PATCH 17/17] mbuf: remove old packet type bit masks for ol_flags Helin Zhang
2015-01-30 13:37 ` Olivier MATZ
2015-02-02 1:53 ` Zhang, Helin
2015-01-30 13:31 ` [dpdk-dev] [PATCH 00/17] unified packet type Olivier MATZ
2015-02-02 2:44 ` Zhang, Helin
[not found] ` <54CF617B.5010009@6wind.com>
[not found] ` <2601191342CEEE43887BDE71AB977258213E28EC@irsmsx105.ger.corp.intel.com>
2015-02-03 3:25 ` Zhang, Helin
2015-02-03 8:55 ` Olivier MATZ
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 00/15] " Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 01/15] mbuf: add definitions of unified packet types Helin Zhang
2015-02-09 10:27 ` Bruce Richardson
2015-02-10 0:53 ` Zhang, Helin
2015-02-10 10:12 ` Bruce Richardson
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 02/15] e1000: support of unified packet type Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 03/15] ixgbe: " Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 04/15] ixgbe: support of unified packet type for vector Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 05/15] i40e: support of unified packet type Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 06/15] enic: " Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 07/15] vmxnet3: " Helin Zhang
2015-02-11 1:46 ` Yong Wang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 08/15] app/test-pipeline: " Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 09/15] app/test: " Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 10/15] examples/ip_fragmentation: " Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 11/15] examples/ip_reassembly: " Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 12/15] examples/l3fwd-acl: " Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 13/15] examples/l3fwd-power: " Helin Zhang
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 14/15] examples/l3fwd: " Helin Zhang
2015-02-16 17:04 ` Ananyev, Konstantin
2015-02-17 2:57 ` Zhang, Helin
2015-02-09 6:40 ` [dpdk-dev] [PATCH v2 15/15] mbuf: remove old packet type bit masks Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 01/16] mbuf: redefinition of packet_type in rte_mbuf Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 02/16] ixgbe: support of unified packet type for vector Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 03/16] mbuf: add definitions of unified packet types Helin Zhang
2015-02-17 9:01 ` Olivier MATZ
2015-02-20 14:26 ` Zhang, Helin
2015-02-24 9:09 ` Olivier MATZ
2015-02-24 13:38 ` Zhang, Helin
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 04/16] e1000: support of unified packet type Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 05/16] ixgbe: " Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 06/16] i40e: " Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 07/16] enic: " Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 08/16] vmxnet3: " Helin Zhang
2015-02-27 11:25 ` Thomas Monjalon
2015-02-27 12:26 ` Zhang, Helin
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 09/16] app/test-pipeline: " Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 10/16] app/testpmd: " Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 11/16] examples/ip_fragmentation: " Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 12/16] examples/ip_reassembly: " Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 13/16] examples/l3fwd-acl: " Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 14/16] examples/l3fwd-power: " Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 15/16] examples/l3fwd: " Helin Zhang
2015-02-17 6:59 ` [dpdk-dev] [PATCH v3 16/16] mbuf: remove old packet type bit masks Helin Zhang
2015-02-17 7:03 ` [dpdk-dev] [PATCH v3 00/16] unified packet type Liang, Cunming
2015-02-17 9:46 ` Ananyev, Konstantin
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 00/18] " Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 01/18] mbuf: redefinition of packet_type in rte_mbuf Helin Zhang
2015-03-02 11:47 ` Chilikin, Andrey
2015-03-04 8:34 ` Zhang, Helin
2015-03-04 10:58 ` Chilikin, Andrey
2015-03-05 0:55 ` Zhang, Helin
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 02/18] ixgbe: support of unified packet type for vector Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 03/18] mbuf: add definitions of unified packet types Helin Zhang
2015-02-27 15:02 ` Olivier MATZ
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 04/18] e1000: support of unified packet type Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 05/18] ixgbe: " Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 06/18] i40e: " Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 07/18] enic: " Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 08/18] vmxnet3: " Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 09/18] fm10k: " Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 10/18] app/test-pipeline: " Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 11/18] app/testpmd: " Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 12/18] app/test: Remove useless code Helin Zhang
2015-02-27 16:01 ` Gajdzica, MaciejX T
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 13/18] examples/ip_fragmentation: support of unified packet type Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 14/18] examples/ip_reassembly: " Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 15/18] examples/l3fwd-acl: " Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 16/18] examples/l3fwd-power: " Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 17/18] examples/l3fwd: " Helin Zhang
2015-02-27 13:11 ` [dpdk-dev] [PATCH v4 18/18] mbuf: remove old packet type bit masks Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 00/18] unified packet type Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
2015-05-22 10:09 ` Neil Horman
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 02/18] ixgbe: support unified packet type in vectorized PMD Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 03/18] mbuf: add definitions of unified packet types Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 04/18] e1000: replace bit mask based packet type with unified packet type Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 05/18] ixgbe: " Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 06/18] i40e: " Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 07/18] enic: " Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 08/18] vmxnet3: " Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 09/18] fm10k: " Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 10/18] app/test-pipeline: " Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 11/18] app/testpmd: " Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 12/18] app/test: Remove useless code Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 14/18] examples/ip_reassembly: " Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 15/18] examples/l3fwd-acl: " Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 16/18] examples/l3fwd-power: " Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 17/18] examples/l3fwd: " Helin Zhang
2015-05-22 8:44 ` [dpdk-dev] [PATCH v5 18/18] mbuf: remove old packet type bit masks Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 00/18] unified packet type Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
2015-06-01 8:14 ` Olivier MATZ
2015-06-02 13:27 ` O'Driscoll, Tim
2015-06-10 14:32 ` Olivier MATZ
2015-06-10 14:51 ` Zhang, Helin
2015-06-10 15:39 ` Ananyev, Konstantin
2015-06-12 3:22 ` Zhang, Helin
2015-06-10 16:14 ` Thomas Monjalon
2015-06-12 7:24 ` Panu Matilainen
2015-06-12 7:43 ` Zhang, Helin
2015-06-12 8:15 ` Panu Matilainen
2015-06-12 8:28 ` Zhang, Helin
2015-06-12 9:00 ` Panu Matilainen
2015-06-12 9:07 ` Bruce Richardson
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 02/18] ixgbe: support unified packet type in vectorized PMD Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 03/18] mbuf: add definitions of unified packet types Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 04/18] e1000: replace bit mask based packet type with unified packet type Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 05/18] ixgbe: " Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 06/18] i40e: " Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 07/18] enic: " Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 08/18] vmxnet3: " Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 09/18] fm10k: " Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 10/18] app/test-pipeline: " Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 11/18] app/testpmd: " Helin Zhang
2015-06-01 7:33 ` [dpdk-dev] [PATCH v6 12/18] app/test: Remove useless code Helin Zhang
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 14/18] examples/ip_reassembly: " Helin Zhang
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 15/18] examples/l3fwd-acl: " Helin Zhang
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 16/18] examples/l3fwd-power: " Helin Zhang
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 17/18] examples/l3fwd: " Helin Zhang
2015-06-01 7:34 ` [dpdk-dev] [PATCH v6 18/18] mbuf: remove old packet type bit masks Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 00/18] unified packet type Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 02/18] ixgbe: support unified packet type in vectorized PMD Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 03/18] mbuf: add definitions of unified packet types Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 04/18] e1000: replace bit mask based packet type with unified packet type Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 05/18] ixgbe: " Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 06/18] i40e: " Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 07/18] enic: " Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 08/18] vmxnet3: " Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 09/18] fm10k: " Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 10/18] app/test-pipeline: " Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 11/18] app/testpmd: " Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 12/18] app/test: Remove useless code Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 14/18] examples/ip_reassembly: " Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 15/18] examples/l3fwd-acl: " Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 16/18] examples/l3fwd-power: " Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 17/18] examples/l3fwd: " Helin Zhang
2015-06-19 8:14 ` [dpdk-dev] [PATCH v7 18/18] mbuf: remove old packet type bit masks Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 01/18] mbuf: redefine packet_type in rte_mbuf Helin Zhang
2015-07-02 9:03 ` Thomas Monjalon
2015-07-03 1:11 ` Zhang, Helin
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 02/18] ixgbe: support unified packet type in vectorized PMD Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 03/18] mbuf: add definitions of unified packet types Helin Zhang
2015-06-30 8:43 ` Olivier MATZ
2015-07-02 1:30 ` Zhang, Helin
2015-07-02 9:31 ` Olivier MATZ
2015-07-03 1:30 ` Zhang, Helin
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 04/18] e1000: replace bit mask based packet type with unified packet type Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 05/18] ixgbe: " Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 06/18] i40e: " Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 07/18] enic: " Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 08/18] vmxnet3: " Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 09/18] fm10k: " Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 10/18] app/test-pipeline: " Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 11/18] app/testpmd: " Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 12/18] app/test: Remove useless code Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 13/18] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 14/18] examples/ip_reassembly: " Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 15/18] examples/l3fwd-acl: " Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 16/18] examples/l3fwd-power: " Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 17/18] examples/l3fwd: " Helin Zhang
2015-06-23 1:50 ` [dpdk-dev] [PATCH v8 18/18] mbuf: remove old packet type bit masks Helin Zhang
2015-06-23 16:13 ` [dpdk-dev] [PATCH v8 00/18] unified packet type Ananyev, Konstantin
2015-07-02 8:45 ` Liu, Yong
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 00/19] " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 01/19] mbuf: redefine packet_type in rte_mbuf Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 02/19] mbuf: add definitions of unified packet types Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 03/19] e1000: replace bit mask based packet type with unified packet type Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 04/19] ixgbe: " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 05/19] i40e: " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 06/19] enic: " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 07/19] vmxnet3: " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 08/19] fm10k: " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 09/19] cxgbe: " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 10/19] app/test-pipeline: " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 11/19] app/testpmd: " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 12/19] app/test: Remove useless code Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 13/19] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 14/19] examples/ip_reassembly: " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 15/19] examples/l3fwd-acl: " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 16/19] examples/l3fwd-power: " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 17/19] examples/l3fwd: " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 18/19] examples/tep_termination: " Helin Zhang
2015-07-03 8:32 ` [dpdk-dev] [PATCH v9 19/19] mbuf: remove old packet type bit masks Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 00/19] unified packet type Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 01/19] mbuf: redefine packet_type in rte_mbuf Helin Zhang
2015-07-13 15:53 ` Thomas Monjalon
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 02/19] mbuf: add definitions of unified packet types Helin Zhang
2015-07-15 10:19 ` Olivier MATZ
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 03/19] e1000: replace bit mask based packet type with unified packet type Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 04/19] ixgbe: " Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 05/19] i40e: " Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 06/19] enic: " Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 07/19] vmxnet3: " Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 08/19] fm10k: " Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 09/19] cxgbe: " Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 10/19] app/test-pipeline: " Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 11/19] app/testpmd: " Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 12/19] app/test: Remove useless code Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 13/19] examples/ip_fragmentation: replace bit mask based packet type with unified packet type Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 14/19] examples/ip_reassembly: " Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 15/19] examples/l3fwd-acl: " Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 16/19] examples/l3fwd-power: " Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 17/19] examples/l3fwd: " Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 18/19] examples/tep_termination: " Helin Zhang
2015-07-09 16:31 ` [dpdk-dev] [PATCH v10 19/19] mbuf: remove old packet type bit masks Helin Zhang
2015-07-13 16:13 ` Thomas Monjalon
2015-07-13 16:25 ` Zhang, Helin
2015-07-13 16:27 ` Thomas Monjalon
2015-07-13 16:32 ` Zhang, Helin
2015-07-13 17:58 ` Zhang, Helin
2015-07-15 17:32 ` [dpdk-dev] [PATCH] mlx4: replace some offload flags with packet type Thomas Monjalon
2015-07-15 18:06 ` Zhang, Helin
2015-07-15 23:05 ` Thomas Monjalon
2015-07-15 23:00 ` [dpdk-dev] [PATCH v10 00/19] unified " Thomas Monjalon
2015-07-15 23:51 ` Zhang, Helin