* [dpdk-dev] [PATCH RFC 0/6] support of QinQ stripping and insertion of i40e
@ 2015-05-05 2:32 Helin Zhang
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for QinQ support Helin Zhang
` (6 more replies)
0 siblings, 7 replies; 55+ messages in thread
From: Helin Zhang @ 2015-05-05 2:32 UTC (permalink / raw)
To: dev
As i40e hardware can be reconfigured to support QinQ stripping and
insertion, this patch set enables that by updating 'struct rte_mbuf'
and adding testpmd commands.
Note that the vector PMD will be updated later.
Helin Zhang (6):
mbuf: update mbuf structure for QinQ support
i40e: reconfigure the hardware to support QinQ stripping/insertion
i40e: support of QinQ stripping/insertion in RX/TX
ethdev: add QinQ offload capability flags
i40e: update of offload capability flags
app/testpmd: support of QinQ stripping and insertion
app/test-pmd/cmdline.c | 78 +++++++++++++++++++++++++++++---
app/test-pmd/config.c | 23 +++++++++-
app/test-pmd/flowgen.c | 8 ++--
app/test-pmd/macfwd.c | 5 ++-
app/test-pmd/macswap.c | 5 ++-
app/test-pmd/rxonly.c | 5 ++-
app/test-pmd/testpmd.h | 6 ++-
app/test-pmd/txonly.c | 10 +++--
app/test/packet_burst_generator.c | 4 +-
lib/librte_ether/rte_ethdev.h | 28 ++++++------
lib/librte_ether/rte_ether.h | 4 +-
lib/librte_mbuf/rte_mbuf.h | 22 +++++++--
lib/librte_pmd_e1000/em_rxtx.c | 8 ++--
lib/librte_pmd_e1000/igb_rxtx.c | 8 ++--
lib/librte_pmd_enic/enic_ethdev.c | 2 +-
lib/librte_pmd_enic/enic_main.c | 2 +-
lib/librte_pmd_fm10k/fm10k_rxtx.c | 2 +-
lib/librte_pmd_i40e/i40e_ethdev.c | 50 +++++++++++++++++++++
lib/librte_pmd_i40e/i40e_ethdev_vf.c | 13 ++++++
lib/librte_pmd_i40e/i40e_rxtx.c | 85 +++++++++++++++++++++++------------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 11 +++--
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 6 +--
22 files changed, 297 insertions(+), 88 deletions(-)
--
1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for QinQ support
2015-05-05 2:32 [dpdk-dev] [PATCH RFC 0/6] support of QinQ stripping and insertion of i40e Helin Zhang
@ 2015-05-05 2:32 ` Helin Zhang
2015-05-05 11:04 ` Ananyev, Konstantin
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 2/6] i40e: reconfigure the hardware to support QinQ stripping/insertion Helin Zhang
` (5 subsequent siblings)
6 siblings, 1 reply; 55+ messages in thread
From: Helin Zhang @ 2015-05-05 2:32 UTC (permalink / raw)
To: dev
To support QinQ, the 'vlan_tci' field is replaced by 'vlan_tci0' and
'vlan_tci1', and the new offload flags 'PKT_RX_QINQ_PKT' and
'PKT_TX_QINQ_PKT' are added.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pmd/flowgen.c | 2 +-
app/test-pmd/macfwd.c | 2 +-
app/test-pmd/macswap.c | 2 +-
app/test-pmd/rxonly.c | 2 +-
app/test-pmd/txonly.c | 2 +-
app/test/packet_burst_generator.c | 4 ++--
lib/librte_ether/rte_ether.h | 4 ++--
lib/librte_mbuf/rte_mbuf.h | 22 +++++++++++++++++++---
lib/librte_pmd_e1000/em_rxtx.c | 8 ++++----
lib/librte_pmd_e1000/igb_rxtx.c | 8 ++++----
lib/librte_pmd_enic/enic_ethdev.c | 2 +-
lib/librte_pmd_enic/enic_main.c | 2 +-
lib/librte_pmd_fm10k/fm10k_rxtx.c | 2 +-
lib/librte_pmd_i40e/i40e_rxtx.c | 8 ++++----
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 11 +++++------
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 6 +++---
16 files changed, 51 insertions(+), 36 deletions(-)
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 72016c9..f24b00c 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -207,7 +207,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
pkt->nb_segs = 1;
pkt->pkt_len = pkt_size;
pkt->ol_flags = ol_flags;
- pkt->vlan_tci = vlan_tci;
+ pkt->vlan_tci0 = vlan_tci;
pkt->l2_len = sizeof(struct ether_hdr);
pkt->l3_len = sizeof(struct ipv4_hdr);
pkts_burst[nb_pkt] = pkt;
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 035e5eb..590b613 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -120,7 +120,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
mb->ol_flags = ol_flags;
mb->l2_len = sizeof(struct ether_hdr);
mb->l3_len = sizeof(struct ipv4_hdr);
- mb->vlan_tci = txp->tx_vlan_id;
+ mb->vlan_tci0 = txp->tx_vlan_id;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c
index 6729849..c355399 100644
--- a/app/test-pmd/macswap.c
+++ b/app/test-pmd/macswap.c
@@ -122,7 +122,7 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
mb->ol_flags = ol_flags;
mb->l2_len = sizeof(struct ether_hdr);
mb->l3_len = sizeof(struct ipv4_hdr);
- mb->vlan_tci = txp->tx_vlan_id;
+ mb->vlan_tci0 = txp->tx_vlan_id;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index ac56090..aa2cf7f 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -159,7 +159,7 @@ pkt_burst_receive(struct fwd_stream *fs)
mb->hash.fdir.hash, mb->hash.fdir.id);
}
if (ol_flags & PKT_RX_VLAN_PKT)
- printf(" - VLAN tci=0x%x", mb->vlan_tci);
+ printf(" - VLAN tci=0x%x", mb->vlan_tci0);
if (is_encapsulation) {
struct ipv4_hdr *ipv4_hdr;
struct ipv6_hdr *ipv6_hdr;
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index ca32c85..4a2827f 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -266,7 +266,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
pkt->nb_segs = tx_pkt_nb_segs;
pkt->pkt_len = tx_pkt_length;
pkt->ol_flags = ol_flags;
- pkt->vlan_tci = vlan_tci;
+ pkt->vlan_tci0 = vlan_tci;
pkt->l2_len = sizeof(struct ether_hdr);
pkt->l3_len = sizeof(struct ipv4_hdr);
pkts_burst[nb_pkt] = pkt;
diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index b46eed7..959644c 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -270,7 +270,7 @@ nomore_mbuf:
pkt->l2_len = eth_hdr_size;
if (ipv4) {
- pkt->vlan_tci = ETHER_TYPE_IPv4;
+ pkt->vlan_tci0 = ETHER_TYPE_IPv4;
pkt->l3_len = sizeof(struct ipv4_hdr);
if (vlan_enabled)
@@ -278,7 +278,7 @@ nomore_mbuf:
else
pkt->ol_flags = PKT_RX_IPV4_HDR;
} else {
- pkt->vlan_tci = ETHER_TYPE_IPv6;
+ pkt->vlan_tci0 = ETHER_TYPE_IPv6;
pkt->l3_len = sizeof(struct ipv6_hdr);
if (vlan_enabled)
diff --git a/lib/librte_ether/rte_ether.h b/lib/librte_ether/rte_ether.h
index 49f4576..6d682a2 100644
--- a/lib/librte_ether/rte_ether.h
+++ b/lib/librte_ether/rte_ether.h
@@ -357,7 +357,7 @@ static inline int rte_vlan_strip(struct rte_mbuf *m)
struct vlan_hdr *vh = (struct vlan_hdr *)(eh + 1);
m->ol_flags |= PKT_RX_VLAN_PKT;
- m->vlan_tci = rte_be_to_cpu_16(vh->vlan_tci);
+ m->vlan_tci0 = rte_be_to_cpu_16(vh->vlan_tci);
/* Copy ether header over rather than moving whole packet */
memmove(rte_pktmbuf_adj(m, sizeof(struct vlan_hdr)),
@@ -404,7 +404,7 @@ static inline int rte_vlan_insert(struct rte_mbuf **m)
nh->ether_type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
vh = (struct vlan_hdr *) (nh + 1);
- vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci);
+ vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci0);
return 0;
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 70b0987..6eed54f 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -101,11 +101,17 @@ extern "C" {
#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
+#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double VLAN stripped. */
/* add new RX flags here */
/* add new TX flags here */
/**
+ * Second VLAN insertion (QinQ) flag.
+ */
+#define PKT_TX_QINQ_PKT (1ULL << 49)
+
+/**
* TCP segmentation offload. To enable this offload feature for a
* packet to be transmitted on hardware supporting TSO:
* - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
@@ -268,7 +274,6 @@ struct rte_mbuf {
uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
- uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
uint16_t reserved;
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
@@ -289,6 +294,15 @@ struct rte_mbuf {
uint32_t usr; /**< User defined tags. See rte_distributor_process() */
} hash; /**< hash information */
+ /* VLAN tags */
+ union {
+ uint32_t vlan_tags;
+ struct {
+ uint16_t vlan_tci0;
+ uint16_t vlan_tci1;
+ };
+ };
+
uint32_t seqn; /**< Sequence number. See also rte_reorder_insert() */
/* second cache line - fields only used in slow path or on TX */
@@ -766,7 +780,8 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
m->next = NULL;
m->pkt_len = 0;
m->tx_offload = 0;
- m->vlan_tci = 0;
+ m->vlan_tci0 = 0;
+ m->vlan_tci1 = 0;
m->nb_segs = 1;
m->port = 0xff;
@@ -838,7 +853,8 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
mi->data_off = m->data_off;
mi->data_len = m->data_len;
mi->port = m->port;
- mi->vlan_tci = m->vlan_tci;
+ mi->vlan_tci0 = m->vlan_tci0;
+ mi->vlan_tci1 = m->vlan_tci1;
mi->tx_offload = m->tx_offload;
mi->hash = m->hash;
diff --git a/lib/librte_pmd_e1000/em_rxtx.c b/lib/librte_pmd_e1000/em_rxtx.c
index 64d067c..422f4ed 100644
--- a/lib/librte_pmd_e1000/em_rxtx.c
+++ b/lib/librte_pmd_e1000/em_rxtx.c
@@ -438,7 +438,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/* If hardware offload required */
tx_ol_req = (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK));
if (tx_ol_req) {
- hdrlen.f.vlan_tci = tx_pkt->vlan_tci;
+ hdrlen.f.vlan_tci = tx_pkt->vlan_tci0;
hdrlen.f.l2_len = tx_pkt->l2_len;
hdrlen.f.l3_len = tx_pkt->l3_len;
/* If new context to be built or reuse the exist ctx. */
@@ -534,7 +534,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/* Set VLAN Tag offload fields. */
if (ol_flags & PKT_TX_VLAN_PKT) {
cmd_type_len |= E1000_TXD_CMD_VLE;
- popts_spec = tx_pkt->vlan_tci << E1000_TXD_VLAN_SHIFT;
+ popts_spec = tx_pkt->vlan_tci0 << E1000_TXD_VLAN_SHIFT;
}
if (tx_ol_req) {
@@ -800,7 +800,7 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rx_desc_error_to_pkt_flags(rxd.errors);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
- rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
+ rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.special);
/*
* Store the mbuf address into the next entry of the array
@@ -1026,7 +1026,7 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rx_desc_error_to_pkt_flags(rxd.errors);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
- rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
+ rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.special);
/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 80d05c0..fda273f 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -401,7 +401,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
ol_flags = tx_pkt->ol_flags;
l2_l3_len.l2_len = tx_pkt->l2_len;
l2_l3_len.l3_len = tx_pkt->l3_len;
- vlan_macip_lens.f.vlan_tci = tx_pkt->vlan_tci;
+ vlan_macip_lens.f.vlan_tci = tx_pkt->vlan_tci0;
vlan_macip_lens.f.l2_l3_len = l2_l3_len.u16;
tx_ol_req = ol_flags & IGB_TX_OFFLOAD_MASK;
@@ -784,7 +784,7 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
- rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
+ rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
@@ -1015,10 +1015,10 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
/*
- * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
+ * The vlan_tci0 field is only valid when PKT_RX_VLAN_PKT is
* set in the pkt_flags field.
*/
- first_seg->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
+ first_seg->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
diff --git a/lib/librte_pmd_enic/enic_ethdev.c b/lib/librte_pmd_enic/enic_ethdev.c
index 69ad01b..45c0e14 100644
--- a/lib/librte_pmd_enic/enic_ethdev.c
+++ b/lib/librte_pmd_enic/enic_ethdev.c
@@ -506,7 +506,7 @@ static uint16_t enicpmd_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
return index;
}
pkt_len = tx_pkt->pkt_len;
- vlan_id = tx_pkt->vlan_tci;
+ vlan_id = tx_pkt->vlan_tci0;
ol_flags = tx_pkt->ol_flags;
for (frags = 0; inc_len < pkt_len; frags++) {
if (!tx_pkt)
diff --git a/lib/librte_pmd_enic/enic_main.c b/lib/librte_pmd_enic/enic_main.c
index 15313c2..d1660a1 100644
--- a/lib/librte_pmd_enic/enic_main.c
+++ b/lib/librte_pmd_enic/enic_main.c
@@ -490,7 +490,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
if (vlan_tci) {
rx_pkt->ol_flags |= PKT_RX_VLAN_PKT;
- rx_pkt->vlan_tci = vlan_tci;
+ rx_pkt->vlan_tci0 = vlan_tci;
}
return eop;
diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c
index 83bddfc..ba3b8aa 100644
--- a/lib/librte_pmd_fm10k/fm10k_rxtx.c
+++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c
@@ -410,7 +410,7 @@ static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb)
/* set vlan if requested */
if (mb->ol_flags & PKT_TX_VLAN_PKT)
- q->hw_ring[q->next_free].vlan = mb->vlan_tci;
+ q->hw_ring[q->next_free].vlan = mb->vlan_tci0;
/* fill up the rings */
for (; mb != NULL; mb = mb->next) {
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 493cfa3..1fe377c 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -703,7 +703,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
mb->data_len = pkt_len;
mb->pkt_len = pkt_len;
- mb->vlan_tci = rx_status &
+ mb->vlan_tci0 = rx_status &
(1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
rte_le_to_cpu_16(\
rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
@@ -947,7 +947,7 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rxm->data_len = rx_packet_len;
rxm->port = rxq->port_id;
- rxm->vlan_tci = rx_status &
+ rxm->vlan_tci0 = rx_status &
(1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
@@ -1106,7 +1106,7 @@ i40e_recv_scattered_pkts(void *rx_queue,
}
first_seg->port = rxq->port_id;
- first_seg->vlan_tci = (rx_status &
+ first_seg->vlan_tci0 = (rx_status &
(1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
@@ -1291,7 +1291,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* Descriptor based VLAN insertion */
if (ol_flags & PKT_TX_VLAN_PKT) {
- tx_flags |= tx_pkt->vlan_tci <<
+ tx_flags |= tx_pkt->vlan_tci0 <<
I40E_TX_FLAG_L2TAG1_SHIFT;
tx_flags |= I40E_TX_FLAG_INSERT_VLAN;
td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 7f15f15..fd664da 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -612,7 +612,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.l2_len = tx_pkt->l2_len;
tx_offload.l3_len = tx_pkt->l3_len;
tx_offload.l4_len = tx_pkt->l4_len;
- tx_offload.vlan_tci = tx_pkt->vlan_tci;
+ tx_offload.vlan_tci = tx_pkt->vlan_tci0;
tx_offload.tso_segsz = tx_pkt->tso_segsz;
/* If new context need be built or reuse the exist ctx. */
@@ -981,8 +981,7 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
pkt_len = (uint16_t)(rxdp[j].wb.upper.length - rxq->crc_len);
mb->data_len = pkt_len;
mb->pkt_len = pkt_len;
- mb->vlan_tci = rxdp[j].wb.upper.vlan;
- mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
+ mb->vlan_tci0 = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
@@ -1327,7 +1326,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
- rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
+ rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
@@ -1412,10 +1411,10 @@ ixgbe_fill_cluster_head_buf(
head->port = port_id;
/*
- * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
+ * The vlan_tci0 field is only valid when PKT_RX_VLAN_PKT is
* set in the pkt_flags field.
*/
- head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
+ head->vlan_tci0 = rte_le_to_cpu_16(desc->wb.upper.vlan);
hlen_type_rss = rte_le_to_cpu_32(desc->wb.lower.lo_dword.data);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
pkt_flags |= rx_desc_status_to_pkt_flags(staterr);
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index d8019f5..57a33c9 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -405,7 +405,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/* Add VLAN tag if requested */
if (txm->ol_flags & PKT_TX_VLAN_PKT) {
txd->ti = 1;
- txd->tci = rte_cpu_to_le_16(txm->vlan_tci);
+ txd->tci = rte_cpu_to_le_16(txm->vlan_tci0);
}
/* Record current mbuf for freeing it later in tx complete */
@@ -629,10 +629,10 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rcd->tci);
rxm->ol_flags = PKT_RX_VLAN_PKT;
/* Copy vlan tag in packet buffer */
- rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
+ rxm->vlan_tci0 = rte_le_to_cpu_16((uint16_t)rcd->tci);
} else {
rxm->ol_flags = 0;
- rxm->vlan_tci = 0;
+ rxm->vlan_tci0 = 0;
}
/* Initialize newly received packet buffer */
--
1.9.3
* [dpdk-dev] [PATCH RFC 2/6] i40e: reconfigure the hardware to support QinQ stripping/insertion
2015-05-05 2:32 [dpdk-dev] [PATCH RFC 0/6] support of QinQ stripping and insertion of i40e Helin Zhang
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for QinQ support Helin Zhang
@ 2015-05-05 2:32 ` Helin Zhang
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 3/6] i40e: support of QinQ stripping/insertion in RX/TX Helin Zhang
` (4 subsequent siblings)
6 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-05-05 2:32 UTC (permalink / raw)
To: dev
Reconfiguration is needed to support QinQ stripping and insertion,
as the hardware does not enable them by default.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_i40e/i40e_ethdev.c | 48 +++++++++++++++++++++++++++++++++++++++
1 file changed, 48 insertions(+)
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index 43762f2..9b4bf06 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -211,6 +211,7 @@ static int i40e_dev_filter_ctrl(struct rte_eth_dev *dev,
void *arg);
static void i40e_configure_registers(struct i40e_hw *hw);
static void i40e_hw_init(struct i40e_hw *hw);
+static int i40e_config_qinq(struct i40e_hw *hw, struct i40e_vsi *vsi);
static const struct rte_pci_id pci_id_i40e_map[] = {
#define RTE_PCI_DEV_ID_DECL_I40E(vend, dev) {RTE_PCI_DEVICE(vend, dev)},
@@ -3055,6 +3056,7 @@ i40e_vsi_setup(struct i40e_pf *pf,
* macvlan filter which is expected and cannot be removed.
*/
i40e_update_default_filter_setting(vsi);
+ i40e_config_qinq(hw, vsi);
} else if (type == I40E_VSI_SRIOV) {
memset(&ctxt, 0, sizeof(ctxt));
/**
@@ -3095,6 +3097,8 @@ i40e_vsi_setup(struct i40e_pf *pf,
* Since VSI is not created yet, only configure parameter,
* will add vsi below.
*/
+
+ i40e_config_qinq(hw, vsi);
} else if (type == I40E_VSI_VMDQ2) {
memset(&ctxt, 0, sizeof(ctxt));
/*
@@ -5714,3 +5718,47 @@ i40e_configure_registers(struct i40e_hw *hw)
"0x%"PRIx32, reg_table[i].val, reg_table[i].addr);
}
}
+
+#define I40E_VSI_TSR(_i) (0x00050800 + ((_i) * 4))
+#define I40E_VSI_TSR_QINQ_CONFIG 0xc030
+#define I40E_VSI_L2TAGSTXVALID(_i) (0x00042800 + ((_i) * 4))
+#define I40E_VSI_L2TAGSTXVALID_QINQ 0xab
+static int
+i40e_config_qinq(struct i40e_hw *hw, struct i40e_vsi *vsi)
+{
+ uint32_t reg;
+ int ret;
+
+ if (vsi->vsi_id >= I40E_MAX_NUM_VSIS) {
+ PMD_DRV_LOG(ERR, "VSI ID exceeds the maximum");
+ return -EINVAL;
+ }
+
+ /* Configure for double VLAN RX stripping */
+ reg = I40E_READ_REG(hw, I40E_VSI_TSR(vsi->vsi_id));
+ if ((reg & I40E_VSI_TSR_QINQ_CONFIG) != I40E_VSI_TSR_QINQ_CONFIG) {
+ reg |= I40E_VSI_TSR_QINQ_CONFIG;
+ ret = i40e_aq_debug_write_register(hw,
+ I40E_VSI_TSR(vsi->vsi_id), reg, NULL);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to update VSI_TSR[%d]",
+ vsi->vsi_id);
+ return I40E_ERR_CONFIG;
+ }
+ }
+
+ /* Configure for double VLAN TX insertion */
+ reg = I40E_READ_REG(hw, I40E_VSI_L2TAGSTXVALID(vsi->vsi_id));
+ if ((reg & 0xff) != I40E_VSI_L2TAGSTXVALID_QINQ) {
+ reg = I40E_VSI_L2TAGSTXVALID_QINQ;
+ ret = i40e_aq_debug_write_register(hw,
+ I40E_VSI_L2TAGSTXVALID(vsi->vsi_id), reg, NULL);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to update "
+ "VSI_L2TAGSTXVALID[%d]", vsi->vsi_id);
+ return I40E_ERR_CONFIG;
+ }
+ }
+
+ return 0;
+}
--
1.9.3
* [dpdk-dev] [PATCH RFC 3/6] i40e: support of QinQ stripping/insertion in RX/TX
2015-05-05 2:32 [dpdk-dev] [PATCH RFC 0/6] support of QinQ stripping and insertion of i40e Helin Zhang
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for QinQ support Helin Zhang
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 2/6] i40e: reconfigure the hardware to support QinQ stripping/insertion Helin Zhang
@ 2015-05-05 2:32 ` Helin Zhang
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 4/6] ethdev: add QinQ offload capability flags Helin Zhang
` (3 subsequent siblings)
6 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-05-05 2:32 UTC (permalink / raw)
To: dev
To support QinQ stripping and insertion: for RX stripping, the QinQ
L2 tags are extracted from the RX descriptors and stored in the mbuf;
for TX insertion, they are read from the mbuf and set accordingly in
the TX descriptors.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_i40e/i40e_rxtx.c | 83 +++++++++++++++++++++++++++--------------
1 file changed, 55 insertions(+), 28 deletions(-)
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 1fe377c..e8c96af 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -95,18 +95,41 @@ static uint16_t i40e_xmit_pkts_simple(void *tx_queue,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
+static inline void
+i40e_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union i40e_rx_desc *rxdp)
+{
+ if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+ (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
+ mb->ol_flags |= PKT_RX_VLAN_PKT;
+ mb->vlan_tci0 =
+ rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
+ rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1));
+ } else
+ mb->vlan_tci0 = 0;
+#ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
+ (1 << I40E_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT)) {
+ mb->ol_flags |= PKT_RX_QINQ_PKT;
+ mb->vlan_tci1 = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_1),
+ rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2));
+ } else
+ mb->vlan_tci1 = 0;
+#endif
+ PMD_RX_LOG(DEBUG, "Mbuf vlan_tci0: %u, vlan_tci1: %u",
+ mb->vlan_tci0, mb->vlan_tci1);
+}
+
/* Translate the rx descriptor status to pkt flags */
static inline uint64_t
i40e_rxd_status_to_pkt_flags(uint64_t qword)
{
uint64_t flags;
- /* Check if VLAN packet */
- flags = qword & (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
- PKT_RX_VLAN_PKT : 0;
-
/* Check if RSS_HASH */
- flags |= (((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) &
+ flags = (((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) &
I40E_RX_DESC_FLTSTAT_RSS_HASH) ==
I40E_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
@@ -697,16 +720,12 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
mb = rxep[j].mbuf;
qword1 = rte_le_to_cpu_64(\
rxdp[j].wb.qword1.status_error_len);
- rx_status = (qword1 & I40E_RXD_QW1_STATUS_MASK) >>
- I40E_RXD_QW1_STATUS_SHIFT;
pkt_len = ((qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
mb->data_len = pkt_len;
mb->pkt_len = pkt_len;
- mb->vlan_tci0 = rx_status &
- (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
- rte_le_to_cpu_16(\
- rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
+ mb->ol_flags = 0;
+ i40e_rxd_to_vlan_tci(mb, &rxdp[j]);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
@@ -720,7 +739,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
if (pkt_flags & PKT_RX_FDIR)
pkt_flags |= i40e_rxd_build_fdir(&rxdp[j], mb);
- mb->ol_flags = pkt_flags;
+ mb->ol_flags |= pkt_flags;
}
for (j = 0; j < I40E_LOOK_AHEAD; j++)
@@ -946,10 +965,8 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rxm->pkt_len = rx_packet_len;
rxm->data_len = rx_packet_len;
rxm->port = rxq->port_id;
-
- rxm->vlan_tci0 = rx_status &
- (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
- rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
+ rxm->ol_flags = 0;
+ i40e_rxd_to_vlan_tci(rxm, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
@@ -961,7 +978,7 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (pkt_flags & PKT_RX_FDIR)
pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
- rxm->ol_flags = pkt_flags;
+ rxm->ol_flags |= pkt_flags;
rx_pkts[nb_rx++] = rxm;
}
@@ -1106,9 +1123,8 @@ i40e_recv_scattered_pkts(void *rx_queue,
}
first_seg->port = rxq->port_id;
- first_seg->vlan_tci0 = (rx_status &
- (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
- rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
+ first_seg->ol_flags = 0;
+ i40e_rxd_to_vlan_tci(first_seg, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
@@ -1121,7 +1137,7 @@ i40e_recv_scattered_pkts(void *rx_queue,
if (pkt_flags & PKT_RX_FDIR)
pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
- first_seg->ol_flags = pkt_flags;
+ first_seg->ol_flags |= pkt_flags;
/* Prefetch data of first segment, if configured to do so. */
rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
@@ -1159,17 +1175,15 @@ i40e_recv_scattered_pkts(void *rx_queue,
static inline uint16_t
i40e_calc_context_desc(uint64_t flags)
{
- uint64_t mask = 0ULL;
-
- mask |= (PKT_TX_OUTER_IP_CKSUM | PKT_TX_TCP_SEG);
+ static uint64_t mask = PKT_TX_OUTER_IP_CKSUM |
+ PKT_TX_TCP_SEG |
+ PKT_TX_QINQ_PKT;
#ifdef RTE_LIBRTE_IEEE1588
mask |= PKT_TX_IEEE1588_TMST;
#endif
- if (flags & mask)
- return 1;
- return 0;
+ return ((flags & mask) ? 1 : 0);
}
/* set i40e TSO context descriptor */
@@ -1292,7 +1306,14 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* Descriptor based VLAN insertion */
if (ol_flags & PKT_TX_VLAN_PKT) {
tx_flags |= tx_pkt->vlan_tci0 <<
- I40E_TX_FLAG_L2TAG1_SHIFT;
+ I40E_TX_FLAG_L2TAG1_SHIFT;
+ tx_flags |= I40E_TX_FLAG_INSERT_VLAN;
+ td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
+ td_tag = (tx_flags & I40E_TX_FLAG_L2TAG1_MASK) >>
+ I40E_TX_FLAG_L2TAG1_SHIFT;
+ } else if (ol_flags & PKT_TX_QINQ_PKT) {
+ tx_flags |= tx_pkt->vlan_tci1 <<
+ I40E_TX_FLAG_L2TAG1_SHIFT;
tx_flags |= I40E_TX_FLAG_INSERT_VLAN;
td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
td_tag = (tx_flags & I40E_TX_FLAG_L2TAG1_MASK) >>
@@ -1340,6 +1361,12 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ctx_txd->tunneling_params =
rte_cpu_to_le_32(cd_tunneling_params);
+ if (ol_flags & PKT_TX_QINQ_PKT) {
+ cd_l2tag2 = tx_pkt->vlan_tci0;
+ cd_type_cmd_tso_mss |=
+ ((uint64_t)I40E_TX_CTX_DESC_IL2TAG2 <<
+ I40E_TXD_CTX_QW1_CMD_SHIFT);
+ }
ctx_txd->l2tag2 = rte_cpu_to_le_16(cd_l2tag2);
ctx_txd->type_cmd_tso_mss =
rte_cpu_to_le_64(cd_type_cmd_tso_mss);
--
1.9.3
* [dpdk-dev] [PATCH RFC 4/6] ethdev: add QinQ offload capability flags
2015-05-05 2:32 [dpdk-dev] [PATCH RFC 0/6] support of QinQ stripping and insertion of i40e Helin Zhang
` (2 preceding siblings ...)
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 3/6] i40e: support of QinQ stripping/insertion in RX/TX Helin Zhang
@ 2015-05-05 2:32 ` Helin Zhang
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 5/6] i40e: update of " Helin Zhang
` (2 subsequent siblings)
6 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-05-05 2:32 UTC (permalink / raw)
To: dev
As QinQ stripping and insertion are offload capabilities of some of
the supported hardware, capability flags for them are added
accordingly.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_ether/rte_ethdev.h | 28 +++++++++++++++-------------
1 file changed, 15 insertions(+), 13 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 4648290..1855b2e 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -881,23 +881,25 @@ struct rte_eth_conf {
/**
* RX offload capabilities of a device.
*/
-#define DEV_RX_OFFLOAD_VLAN_STRIP 0x00000001
-#define DEV_RX_OFFLOAD_IPV4_CKSUM 0x00000002
-#define DEV_RX_OFFLOAD_UDP_CKSUM 0x00000004
-#define DEV_RX_OFFLOAD_TCP_CKSUM 0x00000008
-#define DEV_RX_OFFLOAD_TCP_LRO 0x00000010
+#define DEV_RX_OFFLOAD_VLAN_STRIP 0x00000001
+#define DEV_RX_OFFLOAD_QINQ_STRIP 0x00000002
+#define DEV_RX_OFFLOAD_IPV4_CKSUM 0x00000004
+#define DEV_RX_OFFLOAD_UDP_CKSUM 0x00000008
+#define DEV_RX_OFFLOAD_TCP_CKSUM 0x00000010
+#define DEV_RX_OFFLOAD_TCP_LRO 0x00000020
/**
* TX offload capabilities of a device.
*/
-#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
-#define DEV_TX_OFFLOAD_IPV4_CKSUM 0x00000002
-#define DEV_TX_OFFLOAD_UDP_CKSUM 0x00000004
-#define DEV_TX_OFFLOAD_TCP_CKSUM 0x00000008
-#define DEV_TX_OFFLOAD_SCTP_CKSUM 0x00000010
-#define DEV_TX_OFFLOAD_TCP_TSO 0x00000020
-#define DEV_TX_OFFLOAD_UDP_TSO 0x00000040
-#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
+#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000002
+#define DEV_TX_OFFLOAD_IPV4_CKSUM 0x00000004
+#define DEV_TX_OFFLOAD_UDP_CKSUM 0x00000008
+#define DEV_TX_OFFLOAD_TCP_CKSUM 0x00000010
+#define DEV_TX_OFFLOAD_SCTP_CKSUM 0x00000020
+#define DEV_TX_OFFLOAD_TCP_TSO 0x00000040
+#define DEV_TX_OFFLOAD_UDP_TSO 0x00000080
+#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000100
struct rte_eth_dev_info {
struct rte_pci_device *pci_dev; /**< Device PCI information. */
--
1.9.3
* [dpdk-dev] [PATCH RFC 5/6] i40e: update of offload capability flags
2015-05-05 2:32 [dpdk-dev] [PATCH RFC 0/6] support of QinQ stripping and insertion of i40e Helin Zhang
` (3 preceding siblings ...)
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 4/6] ethdev: add QinQ offload capability flags Helin Zhang
@ 2015-05-05 2:32 ` Helin Zhang
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 6/6] app/testpmd: support of QinQ stripping and insertion Helin Zhang
2015-05-26 8:36 ` [dpdk-dev] [PATCH 0/5] support i40e " Helin Zhang
6 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-05-05 2:32 UTC (permalink / raw)
To: dev
As the hardware supports QinQ stripping and insertion, the corresponding
offload capability flags should be reported on both the PF and VF sides.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_pmd_i40e/i40e_ethdev.c | 2 ++
lib/librte_pmd_i40e/i40e_ethdev_vf.c | 13 +++++++++++++
2 files changed, 15 insertions(+)
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index 9b4bf06..a980d83 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -1529,11 +1529,13 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_vfs = dev->pci_dev->max_vfs;
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_QINQ_STRIP |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM;
dev_info->tx_offload_capa =
DEV_TX_OFFLOAD_VLAN_INSERT |
+ DEV_TX_OFFLOAD_QINQ_INSERT |
DEV_TX_OFFLOAD_IPV4_CKSUM |
DEV_TX_OFFLOAD_UDP_CKSUM |
DEV_TX_OFFLOAD_TCP_CKSUM |
diff --git a/lib/librte_pmd_i40e/i40e_ethdev_vf.c b/lib/librte_pmd_i40e/i40e_ethdev_vf.c
index a0d808f..c623429 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev_vf.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev_vf.c
@@ -1643,6 +1643,19 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
+ dev_info->rx_offload_capa =
+ DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_QINQ_STRIP |
+ DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM;
+ dev_info->tx_offload_capa =
+ DEV_TX_OFFLOAD_VLAN_INSERT |
+ DEV_TX_OFFLOAD_QINQ_INSERT |
+ DEV_TX_OFFLOAD_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_UDP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_CKSUM |
+ DEV_TX_OFFLOAD_SCTP_CKSUM;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
--
1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
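With the PF and VF now both advertising the QinQ capabilities, an application can gate its use of the feature on `rte_eth_dev_info_get()`. A minimal sketch of that check, using a mock `dev_info` struct in place of the real `struct rte_eth_dev_info` (the macro values are taken from patch 4/6; the helper function is illustrative only):

```c
#include <stdint.h>

#define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000002 /* value from patch 4/6 */
#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000002

/* Minimal stand-in for struct rte_eth_dev_info; a real application
 * would fill this via rte_eth_dev_info_get() instead. */
struct mock_dev_info {
	uint32_t rx_offload_capa;
	uint32_t tx_offload_capa;
};

/* Enable QinQ stripping/insertion only when the port advertises both
 * capabilities, as the i40e PF and VF now do. */
static int port_supports_qinq(const struct mock_dev_info *info)
{
	return (info->rx_offload_capa & DEV_RX_OFFLOAD_QINQ_STRIP) &&
	       (info->tx_offload_capa & DEV_TX_OFFLOAD_QINQ_INSERT);
}
```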
* [dpdk-dev] [PATCH RFC 6/6] app/testpmd: support of QinQ stripping and insertion
2015-05-05 2:32 [dpdk-dev] [PATCH RFC 0/6] support of QinQ stripping and insertion of i40e Helin Zhang
` (4 preceding siblings ...)
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 5/6] i40e: update of " Helin Zhang
@ 2015-05-05 2:32 ` Helin Zhang
2015-05-26 8:36 ` [dpdk-dev] [PATCH 0/5] support i40e " Helin Zhang
6 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-05-05 2:32 UTC (permalink / raw)
To: dev
As QinQ stripping and insertion are now supported, the test commands
should be updated accordingly. In detail, "tx_vlan set vlan_id (port_id)"
is changed to "tx_vlan set (port_id) vlan_id0[, vlan_id1]" to support
both single and double VLAN tag insertion; in addition, VLAN tags
stripped from received packets are printed in 'rxonly' mode.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pmd/cmdline.c | 78 +++++++++++++++++++++++++++++++++++++++++++++-----
app/test-pmd/config.c | 23 +++++++++++++--
app/test-pmd/flowgen.c | 8 ++++--
app/test-pmd/macfwd.c | 5 +++-
app/test-pmd/macswap.c | 5 +++-
app/test-pmd/rxonly.c | 3 ++
app/test-pmd/testpmd.h | 6 +++-
app/test-pmd/txonly.c | 10 +++++--
8 files changed, 120 insertions(+), 18 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index f01db2a..a19d32a 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -304,9 +304,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"rx_vxlan_port rm (udp_port) (port_id)\n"
" Remove an UDP port for VXLAN packet filter on a port\n\n"
- "tx_vlan set vlan_id (port_id)\n"
- " Set hardware insertion of VLAN ID in packets sent"
- " on a port.\n\n"
+ "tx_vlan set (port_id) vlan_id0[, vlan_id1]\n"
+ " Set hardware insertion of VLAN IDs (single or double VLAN "
"depending on the number of VLAN IDs) in packets sent on a port.\n\n"
"tx_vlan set pvid port_id vlan_id (on|off)\n"
" Set port based TX VLAN insertion.\n\n"
@@ -2799,8 +2799,8 @@ cmdline_parse_inst_t cmd_rx_vlan_filter = {
struct cmd_tx_vlan_set_result {
cmdline_fixed_string_t tx_vlan;
cmdline_fixed_string_t set;
- uint16_t vlan_id;
uint8_t port_id;
+ uint16_t vlan_id;
};
static void
@@ -2809,6 +2809,13 @@ cmd_tx_vlan_set_parsed(void *parsed_result,
__attribute__((unused)) void *data)
{
struct cmd_tx_vlan_set_result *res = parsed_result;
+ int vlan_offload = rte_eth_dev_get_vlan_offload(res->port_id);
+
+ if (vlan_offload & ETH_VLAN_EXTEND_OFFLOAD) {
+ printf("Error: QinQ has been enabled.\n");
+ return;
+ }
+
tx_vlan_set(res->port_id, res->vlan_id);
}
@@ -2828,13 +2835,69 @@ cmdline_parse_token_num_t cmd_tx_vlan_set_portid =
cmdline_parse_inst_t cmd_tx_vlan_set = {
.f = cmd_tx_vlan_set_parsed,
.data = NULL,
- .help_str = "enable hardware insertion of a VLAN header with a given "
- "TAG Identifier in packets sent on a port",
+ .help_str = "enable hardware insertion of a single VLAN header "
+ "with a given TAG Identifier in packets sent on a port",
.tokens = {
(void *)&cmd_tx_vlan_set_tx_vlan,
(void *)&cmd_tx_vlan_set_set,
- (void *)&cmd_tx_vlan_set_vlanid,
(void *)&cmd_tx_vlan_set_portid,
+ (void *)&cmd_tx_vlan_set_vlanid,
+ NULL,
+ },
+};
+
+/* *** ENABLE HARDWARE INSERTION OF Double VLAN HEADER IN TX PACKETS *** */
+struct cmd_tx_vlan_set_qinq_result {
+ cmdline_fixed_string_t tx_vlan;
+ cmdline_fixed_string_t set;
+ uint8_t port_id;
+ uint16_t vlan_id0;
+ uint16_t vlan_id1;
+};
+
+static void
+cmd_tx_vlan_set_qinq_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_tx_vlan_set_qinq_result *res = parsed_result;
+ int vlan_offload = rte_eth_dev_get_vlan_offload(res->port_id);
+
+ if (!(vlan_offload & ETH_VLAN_EXTEND_OFFLOAD)) {
+ printf("Error: QinQ has not been enabled.\n");
+ return;
+ }
+
+ tx_qinq_set(res->port_id, res->vlan_id0, res->vlan_id1);
+}
+
+cmdline_parse_token_string_t cmd_tx_vlan_set_qinq_tx_vlan =
+ TOKEN_STRING_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ tx_vlan, "tx_vlan");
+cmdline_parse_token_string_t cmd_tx_vlan_set_qinq_set =
+ TOKEN_STRING_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ set, "set");
+cmdline_parse_token_num_t cmd_tx_vlan_set_qinq_portid =
+ TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ port_id, UINT8);
+cmdline_parse_token_num_t cmd_tx_vlan_set_qinq_vlanid0 =
+ TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ vlan_id0, UINT16);
+cmdline_parse_token_num_t cmd_tx_vlan_set_qinq_vlanid1 =
+ TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ vlan_id1, UINT16);
+
+cmdline_parse_inst_t cmd_tx_vlan_set_qinq = {
+ .f = cmd_tx_vlan_set_qinq_parsed,
+ .data = NULL,
+ .help_str = "enable hardware insertion of a double VLAN header "
+ "with a given TAG Identifier in packets sent on a port",
+ .tokens = {
+ (void *)&cmd_tx_vlan_set_qinq_tx_vlan,
+ (void *)&cmd_tx_vlan_set_qinq_set,
+ (void *)&cmd_tx_vlan_set_qinq_portid,
+ (void *)&cmd_tx_vlan_set_qinq_vlanid0,
+ (void *)&cmd_tx_vlan_set_qinq_vlanid1,
NULL,
},
};
@@ -8782,6 +8845,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_rx_vlan_filter_all,
(cmdline_parse_inst_t *)&cmd_rx_vlan_filter,
(cmdline_parse_inst_t *)&cmd_tx_vlan_set,
+ (cmdline_parse_inst_t *)&cmd_tx_vlan_set_qinq,
(cmdline_parse_inst_t *)&cmd_tx_vlan_reset,
(cmdline_parse_inst_t *)&cmd_tx_vlan_set_pvid,
(cmdline_parse_inst_t *)&cmd_csum_set,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index f788ed5..6825a1e 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1732,8 +1732,24 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
return;
if (vlan_id_is_invalid(vlan_id))
return;
+ tx_vlan_reset(port_id);
ports[port_id].tx_ol_flags |= TESTPMD_TX_OFFLOAD_INSERT_VLAN;
- ports[port_id].tx_vlan_id = vlan_id;
+ ports[port_id].tx_vlan_id0 = vlan_id;
+}
+
+void
+tx_qinq_set(portid_t port_id, uint16_t vlan_id0, uint16_t vlan_id1)
+{
+ if (port_id_is_invalid(port_id, ENABLED_WARN))
+ return;
+ if (vlan_id_is_invalid(vlan_id0))
+ return;
+ if (vlan_id_is_invalid(vlan_id1))
+ return;
+ tx_vlan_reset(port_id);
+ ports[port_id].tx_ol_flags |= TESTPMD_TX_OFFLOAD_INSERT_QINQ;
+ ports[port_id].tx_vlan_id0 = vlan_id0;
+ ports[port_id].tx_vlan_id1 = vlan_id1;
}
void
@@ -1741,7 +1757,10 @@ tx_vlan_reset(portid_t port_id)
{
if (port_id_is_invalid(port_id, ENABLED_WARN))
return;
- ports[port_id].tx_ol_flags &= ~TESTPMD_TX_OFFLOAD_INSERT_VLAN;
+ ports[port_id].tx_ol_flags &= ~(TESTPMD_TX_OFFLOAD_INSERT_VLAN |
+ TESTPMD_TX_OFFLOAD_INSERT_QINQ);
+ ports[port_id].tx_vlan_id0 = 0;
+ ports[port_id].tx_vlan_id1 = 0;
}
void
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index f24b00c..66a4687 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -136,7 +136,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
struct ether_hdr *eth_hdr;
struct ipv4_hdr *ip_hdr;
struct udp_hdr *udp_hdr;
- uint16_t vlan_tci;
+ uint16_t vlan_tci0, vlan_tci1;
uint16_t ol_flags;
uint16_t nb_rx;
uint16_t nb_tx;
@@ -162,7 +162,8 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
rte_pktmbuf_free(pkts_burst[i]);
mbp = current_fwd_lcore()->mbp;
- vlan_tci = ports[fs->tx_port].tx_vlan_id;
+ vlan_tci0 = ports[fs->tx_port].tx_vlan_id0;
+ vlan_tci1 = ports[fs->tx_port].tx_vlan_id1;
ol_flags = ports[fs->tx_port].tx_ol_flags;
for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
@@ -207,7 +208,8 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
pkt->nb_segs = 1;
pkt->pkt_len = pkt_size;
pkt->ol_flags = ol_flags;
- pkt->vlan_tci0 = vlan_tci;
+ pkt->vlan_tci0 = vlan_tci0;
+ pkt->vlan_tci1 = vlan_tci1;
pkt->l2_len = sizeof(struct ether_hdr);
pkt->l3_len = sizeof(struct ipv4_hdr);
pkts_burst[nb_pkt] = pkt;
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 590b613..5eaa70a 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -110,6 +110,8 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
txp = &ports[fs->tx_port];
if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
ol_flags = PKT_TX_VLAN_PKT;
+ if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_QINQ)
+ ol_flags |= PKT_TX_QINQ_PKT;
for (i = 0; i < nb_rx; i++) {
mb = pkts_burst[i];
eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
@@ -120,7 +122,8 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
mb->ol_flags = ol_flags;
mb->l2_len = sizeof(struct ether_hdr);
mb->l3_len = sizeof(struct ipv4_hdr);
- mb->vlan_tci0 = txp->tx_vlan_id;
+ mb->vlan_tci0 = txp->tx_vlan_id0;
+ mb->vlan_tci1 = txp->tx_vlan_id1;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c
index c355399..fcdb155 100644
--- a/app/test-pmd/macswap.c
+++ b/app/test-pmd/macswap.c
@@ -110,6 +110,8 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
txp = &ports[fs->tx_port];
if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
ol_flags = PKT_TX_VLAN_PKT;
+ if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_QINQ)
+ ol_flags |= PKT_TX_QINQ_PKT;
for (i = 0; i < nb_rx; i++) {
mb = pkts_burst[i];
eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
@@ -122,7 +124,8 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
mb->ol_flags = ol_flags;
mb->l2_len = sizeof(struct ether_hdr);
mb->l3_len = sizeof(struct ipv4_hdr);
- mb->vlan_tci0 = txp->tx_vlan_id;
+ mb->vlan_tci0 = txp->tx_vlan_id0;
+ mb->vlan_tci1 = txp->tx_vlan_id1;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index aa2cf7f..41d3874 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -160,6 +160,9 @@ pkt_burst_receive(struct fwd_stream *fs)
}
if (ol_flags & PKT_RX_VLAN_PKT)
printf(" - VLAN tci=0x%x", mb->vlan_tci0);
+ if (ol_flags & PKT_RX_QINQ_PKT)
+ printf(" - QinQ VLAN tci0=0x%x, VLAN tci1=0x%x",
+ mb->vlan_tci0, mb->vlan_tci1);
if (is_encapsulation) {
struct ipv4_hdr *ipv4_hdr;
struct ipv6_hdr *ipv6_hdr;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 389fc24..890fa3e 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -132,6 +132,8 @@ struct fwd_stream {
#define TESTPMD_TX_OFFLOAD_PARSE_TUNNEL 0x0020
/** Insert VLAN header in forward engine */
#define TESTPMD_TX_OFFLOAD_INSERT_VLAN 0x0040
+/** Insert double VLAN header in forward engine */
+#define TESTPMD_TX_OFFLOAD_INSERT_QINQ 0x0080
/**
* The data structure associated with each port.
@@ -148,7 +150,8 @@ struct rte_port {
unsigned int socket_id; /**< For NUMA support */
uint16_t tx_ol_flags;/**< TX Offload Flags (TESTPMD_TX_OFFLOAD...). */
uint16_t tso_segsz; /**< MSS for segmentation offload. */
- uint16_t tx_vlan_id; /**< Tag Id. in TX VLAN packets. */
+ uint16_t tx_vlan_id0;/**< The (outer) tag ID */
+ uint16_t tx_vlan_id1;/**< The inner tag ID */
void *fwd_ctx; /**< Forwarding mode context */
uint64_t rx_bad_ip_csum; /**< rx pkts with bad ip checksum */
uint64_t rx_bad_l4_csum; /**< rx pkts with bad l4 checksum */
@@ -512,6 +515,7 @@ int rx_vft_set(portid_t port_id, uint16_t vlan_id, int on);
void vlan_extend_set(portid_t port_id, int on);
void vlan_tpid_set(portid_t port_id, uint16_t tp_id);
void tx_vlan_set(portid_t port_id, uint16_t vlan_id);
+void tx_qinq_set(portid_t port_id, uint16_t vlan_id0, uint16_t vlan_id1);
void tx_vlan_reset(portid_t port_id);
void tx_vlan_pvid_set(portid_t port_id, uint16_t vlan_id, int on);
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 4a2827f..9c7a86e 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -202,7 +202,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
struct ether_hdr eth_hdr;
uint16_t nb_tx;
uint16_t nb_pkt;
- uint16_t vlan_tci;
+ uint16_t vlan_tci0, vlan_tci1;
uint64_t ol_flags = 0;
uint8_t i;
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
@@ -217,9 +217,12 @@ pkt_burst_transmit(struct fwd_stream *fs)
mbp = current_fwd_lcore()->mbp;
txp = &ports[fs->tx_port];
- vlan_tci = txp->tx_vlan_id;
+ vlan_tci0 = txp->tx_vlan_id0;
+ vlan_tci1 = txp->tx_vlan_id1;
if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
ol_flags = PKT_TX_VLAN_PKT;
+ if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_QINQ)
+ ol_flags |= PKT_TX_QINQ_PKT;
for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
pkt = tx_mbuf_alloc(mbp);
if (pkt == NULL) {
@@ -266,7 +269,8 @@ pkt_burst_transmit(struct fwd_stream *fs)
pkt->nb_segs = tx_pkt_nb_segs;
pkt->pkt_len = tx_pkt_length;
pkt->ol_flags = ol_flags;
- pkt->vlan_tci0 = vlan_tci;
+ pkt->vlan_tci0 = vlan_tci0;
+ pkt->vlan_tci1 = vlan_tci1;
pkt->l2_len = sizeof(struct ether_hdr);
pkt->l3_len = sizeof(struct ipv4_hdr);
pkts_burst[nb_pkt] = pkt;
--
1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
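The forwarding engines (macfwd, macswap, txonly) all apply the same mapping from testpmd's per-port `tx_ol_flags` to the mbuf TX offload flags. That mapping can be sketched stand-alone as below; the TESTPMD_* and PKT_TX_QINQ_PKT values are copied from the patch, while the PKT_TX_VLAN_PKT bit position is a placeholder assumption, not the real DPDK value.

```c
#include <stdint.h>

/* Values mirrored from the patch (testpmd.h / rte_mbuf.h). */
#define TESTPMD_TX_OFFLOAD_INSERT_VLAN 0x0040
#define TESTPMD_TX_OFFLOAD_INSERT_QINQ 0x0080
#define PKT_TX_VLAN_PKT (1ULL << 55) /* placeholder bit, for illustration */
#define PKT_TX_QINQ_PKT (1ULL << 49) /* value from patch 1/6 */

/* Map testpmd's per-port tx_ol_flags onto mbuf ol_flags, as the
 * updated pkt_burst_mac_forward()/pkt_burst_transmit() loops do.
 * Note QINQ is additive: both flags are set when both offloads are on. */
static uint64_t tx_flags_from_port(uint16_t tx_ol_flags)
{
	uint64_t ol_flags = 0;
	if (tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
		ol_flags = PKT_TX_VLAN_PKT;
	if (tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_QINQ)
		ol_flags |= PKT_TX_QINQ_PKT;
	return ol_flags;
}
```

Each engine then copies `tx_vlan_id0` into `vlan_tci0` and `tx_vlan_id1` into `vlan_tci1` unconditionally; the driver only consumes the second tag when PKT_TX_QINQ_PKT is set.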
* Re: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for QinQ support
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for QinQ support Helin Zhang
@ 2015-05-05 11:04 ` Ananyev, Konstantin
2015-05-05 15:42 ` Chilikin, Andrey
2015-05-06 4:06 ` Zhang, Helin
0 siblings, 2 replies; 55+ messages in thread
From: Ananyev, Konstantin @ 2015-05-05 11:04 UTC (permalink / raw)
To: Zhang, Helin, dev
Hi Helin,
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Helin Zhang
> Sent: Tuesday, May 05, 2015 3:32 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for QinQ support
>
> To support QinQ, 'vlan_tci' should be replaced by 'vlan_tci0' and
> 'vlan_tci1'. Also new offload flags of 'PKT_RX_QINQ_PKT' and
> 'PKT_TX_QINQ_PKT' should be added.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> ---
> app/test-pmd/flowgen.c | 2 +-
> app/test-pmd/macfwd.c | 2 +-
> app/test-pmd/macswap.c | 2 +-
> app/test-pmd/rxonly.c | 2 +-
> app/test-pmd/txonly.c | 2 +-
> app/test/packet_burst_generator.c | 4 ++--
> lib/librte_ether/rte_ether.h | 4 ++--
> lib/librte_mbuf/rte_mbuf.h | 22 +++++++++++++++++++---
> lib/librte_pmd_e1000/em_rxtx.c | 8 ++++----
> lib/librte_pmd_e1000/igb_rxtx.c | 8 ++++----
> lib/librte_pmd_enic/enic_ethdev.c | 2 +-
> lib/librte_pmd_enic/enic_main.c | 2 +-
> lib/librte_pmd_fm10k/fm10k_rxtx.c | 2 +-
> lib/librte_pmd_i40e/i40e_rxtx.c | 8 ++++----
> lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 11 +++++------
> lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 6 +++---
> 16 files changed, 51 insertions(+), 36 deletions(-)
>
> diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
> index 72016c9..f24b00c 100644
> --- a/app/test-pmd/flowgen.c
> +++ b/app/test-pmd/flowgen.c
> @@ -207,7 +207,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
> pkt->nb_segs = 1;
> pkt->pkt_len = pkt_size;
> pkt->ol_flags = ol_flags;
> - pkt->vlan_tci = vlan_tci;
> + pkt->vlan_tci0 = vlan_tci;
> pkt->l2_len = sizeof(struct ether_hdr);
> pkt->l3_len = sizeof(struct ipv4_hdr);
> pkts_burst[nb_pkt] = pkt;
> diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
> index 035e5eb..590b613 100644
> --- a/app/test-pmd/macfwd.c
> +++ b/app/test-pmd/macfwd.c
> @@ -120,7 +120,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
> mb->ol_flags = ol_flags;
> mb->l2_len = sizeof(struct ether_hdr);
> mb->l3_len = sizeof(struct ipv4_hdr);
> - mb->vlan_tci = txp->tx_vlan_id;
> + mb->vlan_tci0 = txp->tx_vlan_id;
> }
> nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
> fs->tx_packets += nb_tx;
> diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c
> index 6729849..c355399 100644
> --- a/app/test-pmd/macswap.c
> +++ b/app/test-pmd/macswap.c
> @@ -122,7 +122,7 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
> mb->ol_flags = ol_flags;
> mb->l2_len = sizeof(struct ether_hdr);
> mb->l3_len = sizeof(struct ipv4_hdr);
> - mb->vlan_tci = txp->tx_vlan_id;
> + mb->vlan_tci0 = txp->tx_vlan_id;
> }
> nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
> fs->tx_packets += nb_tx;
> diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
> index ac56090..aa2cf7f 100644
> --- a/app/test-pmd/rxonly.c
> +++ b/app/test-pmd/rxonly.c
> @@ -159,7 +159,7 @@ pkt_burst_receive(struct fwd_stream *fs)
> mb->hash.fdir.hash, mb->hash.fdir.id);
> }
> if (ol_flags & PKT_RX_VLAN_PKT)
> - printf(" - VLAN tci=0x%x", mb->vlan_tci);
> + printf(" - VLAN tci=0x%x", mb->vlan_tci0);
> if (is_encapsulation) {
> struct ipv4_hdr *ipv4_hdr;
> struct ipv6_hdr *ipv6_hdr;
> diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
> index ca32c85..4a2827f 100644
> --- a/app/test-pmd/txonly.c
> +++ b/app/test-pmd/txonly.c
> @@ -266,7 +266,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
> pkt->nb_segs = tx_pkt_nb_segs;
> pkt->pkt_len = tx_pkt_length;
> pkt->ol_flags = ol_flags;
> - pkt->vlan_tci = vlan_tci;
> + pkt->vlan_tci0 = vlan_tci;
> pkt->l2_len = sizeof(struct ether_hdr);
> pkt->l3_len = sizeof(struct ipv4_hdr);
> pkts_burst[nb_pkt] = pkt;
> diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
> index b46eed7..959644c 100644
> --- a/app/test/packet_burst_generator.c
> +++ b/app/test/packet_burst_generator.c
> @@ -270,7 +270,7 @@ nomore_mbuf:
> pkt->l2_len = eth_hdr_size;
>
> if (ipv4) {
> - pkt->vlan_tci = ETHER_TYPE_IPv4;
> + pkt->vlan_tci0 = ETHER_TYPE_IPv4;
> pkt->l3_len = sizeof(struct ipv4_hdr);
>
> if (vlan_enabled)
> @@ -278,7 +278,7 @@ nomore_mbuf:
> else
> pkt->ol_flags = PKT_RX_IPV4_HDR;
> } else {
> - pkt->vlan_tci = ETHER_TYPE_IPv6;
> + pkt->vlan_tci0 = ETHER_TYPE_IPv6;
> pkt->l3_len = sizeof(struct ipv6_hdr);
>
> if (vlan_enabled)
> diff --git a/lib/librte_ether/rte_ether.h b/lib/librte_ether/rte_ether.h
> index 49f4576..6d682a2 100644
> --- a/lib/librte_ether/rte_ether.h
> +++ b/lib/librte_ether/rte_ether.h
> @@ -357,7 +357,7 @@ static inline int rte_vlan_strip(struct rte_mbuf *m)
>
> struct vlan_hdr *vh = (struct vlan_hdr *)(eh + 1);
> m->ol_flags |= PKT_RX_VLAN_PKT;
> - m->vlan_tci = rte_be_to_cpu_16(vh->vlan_tci);
> + m->vlan_tci0 = rte_be_to_cpu_16(vh->vlan_tci);
>
> /* Copy ether header over rather than moving whole packet */
> memmove(rte_pktmbuf_adj(m, sizeof(struct vlan_hdr)),
> @@ -404,7 +404,7 @@ static inline int rte_vlan_insert(struct rte_mbuf **m)
> nh->ether_type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
>
> vh = (struct vlan_hdr *) (nh + 1);
> - vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci);
> + vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci0);
>
> return 0;
> }
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 70b0987..6eed54f 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -101,11 +101,17 @@ extern "C" {
> #define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
> #define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
> #define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
> +#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double VLAN stripped. */
> /* add new RX flags here */
>
> /* add new TX flags here */
>
> /**
> + * Second VLAN insertion (QinQ) flag.
> + */
> +#define PKT_TX_QINQ_PKT (1ULL << 49)
> +
> +/**
> * TCP segmentation offload. To enable this offload feature for a
> * packet to be transmitted on hardware supporting TSO:
> * - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
> @@ -268,7 +274,6 @@ struct rte_mbuf {
>
> uint16_t data_len; /**< Amount of data in segment buffer. */
> uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
> - uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
> uint16_t reserved;
Now there is an implicit 2-byte hole between 'reserved' and 'rss'.
Probably better to make it explicit - make 'reserved' uint32_t.
Another thing - it looks like your change will break ixgbe vector RX.
> union {
> uint32_t rss; /**< RSS hash result if RSS enabled */
> @@ -289,6 +294,15 @@ struct rte_mbuf {
> uint32_t usr; /**< User defined tags. See rte_distributor_process() */
> } hash; /**< hash information */
>
> + /* VLAN tags */
> + union {
> + uint32_t vlan_tags;
> + struct {
> + uint16_t vlan_tci0;
> + uint16_t vlan_tci1;
Do you really need to change vlan_tci to vlan_tci0?
Can't you keep 'vlan_tci' for the first vlan tag, and add something like 'vlan_tci_ext' or 'vlan_tci_next' for the second one?
That would save you a lot of changes; also, users who use a single vlan wouldn't need to update their code for 2.1.
> + };
> + };
> +
> uint32_t seqn; /**< Sequence number. See also rte_reorder_insert() */
>
> /* second cache line - fields only used in slow path or on TX */
> @@ -766,7 +780,8 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
> m->next = NULL;
> m->pkt_len = 0;
> m->tx_offload = 0;
> - m->vlan_tci = 0;
> + m->vlan_tci0 = 0;
> + m->vlan_tci1 = 0;
Why not just:
m->vlan_tags = 0;
?
> m->nb_segs = 1;
> m->port = 0xff;
>
> @@ -838,7 +853,8 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
> mi->data_off = m->data_off;
> mi->data_len = m->data_len;
> mi->port = m->port;
> - mi->vlan_tci = m->vlan_tci;
> + mi->vlan_tci0 = m->vlan_tci0;
> + mi->vlan_tci1 = m->vlan_tci1;
Same thing, why not:
mi->vlan_tags = m->vlan_tags;
?
> mi->tx_offload = m->tx_offload;
> mi->hash = m->hash;
>
> diff --git a/lib/librte_pmd_e1000/em_rxtx.c b/lib/librte_pmd_e1000/em_rxtx.c
> index 64d067c..422f4ed 100644
> --- a/lib/librte_pmd_e1000/em_rxtx.c
> +++ b/lib/librte_pmd_e1000/em_rxtx.c
> @@ -438,7 +438,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> /* If hardware offload required */
> tx_ol_req = (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK));
> if (tx_ol_req) {
> - hdrlen.f.vlan_tci = tx_pkt->vlan_tci;
> + hdrlen.f.vlan_tci = tx_pkt->vlan_tci0;
> hdrlen.f.l2_len = tx_pkt->l2_len;
> hdrlen.f.l3_len = tx_pkt->l3_len;
> /* If new context to be built or reuse the exist ctx. */
> @@ -534,7 +534,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> /* Set VLAN Tag offload fields. */
> if (ol_flags & PKT_TX_VLAN_PKT) {
> cmd_type_len |= E1000_TXD_CMD_VLE;
> - popts_spec = tx_pkt->vlan_tci << E1000_TXD_VLAN_SHIFT;
> + popts_spec = tx_pkt->vlan_tci0 << E1000_TXD_VLAN_SHIFT;
> }
>
> if (tx_ol_req) {
> @@ -800,7 +800,7 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> rx_desc_error_to_pkt_flags(rxd.errors);
>
> /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> - rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
> + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.special);
>
> /*
> * Store the mbuf address into the next entry of the array
> @@ -1026,7 +1026,7 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> rx_desc_error_to_pkt_flags(rxd.errors);
>
> /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> - rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
> + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.special);
>
> /* Prefetch data of first segment, if configured to do so. */
> rte_packet_prefetch((char *)first_seg->buf_addr +
> diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
> index 80d05c0..fda273f 100644
> --- a/lib/librte_pmd_e1000/igb_rxtx.c
> +++ b/lib/librte_pmd_e1000/igb_rxtx.c
> @@ -401,7 +401,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> ol_flags = tx_pkt->ol_flags;
> l2_l3_len.l2_len = tx_pkt->l2_len;
> l2_l3_len.l3_len = tx_pkt->l3_len;
> - vlan_macip_lens.f.vlan_tci = tx_pkt->vlan_tci;
> + vlan_macip_lens.f.vlan_tci = tx_pkt->vlan_tci0;
> vlan_macip_lens.f.l2_l3_len = l2_l3_len.u16;
> tx_ol_req = ol_flags & IGB_TX_OFFLOAD_MASK;
>
> @@ -784,7 +784,7 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
> hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> - rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
>
> pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
> @@ -1015,10 +1015,10 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
>
> /*
> - * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
> + * The vlan_tci0 field is only valid when PKT_RX_VLAN_PKT is
> * set in the pkt_flags field.
> */
> - first_seg->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> + first_seg->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
> diff --git a/lib/librte_pmd_enic/enic_ethdev.c b/lib/librte_pmd_enic/enic_ethdev.c
> index 69ad01b..45c0e14 100644
> --- a/lib/librte_pmd_enic/enic_ethdev.c
> +++ b/lib/librte_pmd_enic/enic_ethdev.c
> @@ -506,7 +506,7 @@ static uint16_t enicpmd_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> return index;
> }
> pkt_len = tx_pkt->pkt_len;
> - vlan_id = tx_pkt->vlan_tci;
> + vlan_id = tx_pkt->vlan_tci0;
> ol_flags = tx_pkt->ol_flags;
> for (frags = 0; inc_len < pkt_len; frags++) {
> if (!tx_pkt)
> diff --git a/lib/librte_pmd_enic/enic_main.c b/lib/librte_pmd_enic/enic_main.c
> index 15313c2..d1660a1 100644
> --- a/lib/librte_pmd_enic/enic_main.c
> +++ b/lib/librte_pmd_enic/enic_main.c
> @@ -490,7 +490,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
>
> if (vlan_tci) {
> rx_pkt->ol_flags |= PKT_RX_VLAN_PKT;
> - rx_pkt->vlan_tci = vlan_tci;
> + rx_pkt->vlan_tci0 = vlan_tci;
> }
>
> return eop;
> diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c b/lib/librte_pmd_fm10k/fm10k_rxtx.c
> index 83bddfc..ba3b8aa 100644
> --- a/lib/librte_pmd_fm10k/fm10k_rxtx.c
> +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c
> @@ -410,7 +410,7 @@ static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb)
>
> /* set vlan if requested */
> if (mb->ol_flags & PKT_TX_VLAN_PKT)
> - q->hw_ring[q->next_free].vlan = mb->vlan_tci;
> + q->hw_ring[q->next_free].vlan = mb->vlan_tci0;
>
> /* fill up the rings */
> for (; mb != NULL; mb = mb->next) {
> diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
> index 493cfa3..1fe377c 100644
> --- a/lib/librte_pmd_i40e/i40e_rxtx.c
> +++ b/lib/librte_pmd_i40e/i40e_rxtx.c
> @@ -703,7 +703,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
> I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
> mb->data_len = pkt_len;
> mb->pkt_len = pkt_len;
> - mb->vlan_tci = rx_status &
> + mb->vlan_tci0 = rx_status &
> (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
> rte_le_to_cpu_16(\
> rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
> @@ -947,7 +947,7 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> rxm->data_len = rx_packet_len;
> rxm->port = rxq->port_id;
>
> - rxm->vlan_tci = rx_status &
> + rxm->vlan_tci0 = rx_status &
> (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
> rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
> pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
> @@ -1106,7 +1106,7 @@ i40e_recv_scattered_pkts(void *rx_queue,
> }
>
> first_seg->port = rxq->port_id;
> - first_seg->vlan_tci = (rx_status &
> + first_seg->vlan_tci0 = (rx_status &
> (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
> rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
> pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
> @@ -1291,7 +1291,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>
> /* Descriptor based VLAN insertion */
> if (ol_flags & PKT_TX_VLAN_PKT) {
> - tx_flags |= tx_pkt->vlan_tci <<
> + tx_flags |= tx_pkt->vlan_tci0 <<
> I40E_TX_FLAG_L2TAG1_SHIFT;
> tx_flags |= I40E_TX_FLAG_INSERT_VLAN;
> td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> index 7f15f15..fd664da 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> @@ -612,7 +612,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> tx_offload.l2_len = tx_pkt->l2_len;
> tx_offload.l3_len = tx_pkt->l3_len;
> tx_offload.l4_len = tx_pkt->l4_len;
> - tx_offload.vlan_tci = tx_pkt->vlan_tci;
> + tx_offload.vlan_tci = tx_pkt->vlan_tci0;
> tx_offload.tso_segsz = tx_pkt->tso_segsz;
>
> /* If new context need be built or reuse the exist ctx. */
> @@ -981,8 +981,7 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
> pkt_len = (uint16_t)(rxdp[j].wb.upper.length - rxq->crc_len);
> mb->data_len = pkt_len;
> mb->pkt_len = pkt_len;
> - mb->vlan_tci = rxdp[j].wb.upper.vlan;
> - mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
> + mb->vlan_tci0 = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
>
> /* convert descriptor fields to rte mbuf flags */
> pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
> @@ -1327,7 +1326,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>
> hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> - rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
>
> pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
> @@ -1412,10 +1411,10 @@ ixgbe_fill_cluster_head_buf(
> head->port = port_id;
>
> /*
> - * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
> + * The vlan_tci0 field is only valid when PKT_RX_VLAN_PKT is
> * set in the pkt_flags field.
> */
> - head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
> + head->vlan_tci0 = rte_le_to_cpu_16(desc->wb.upper.vlan);
> hlen_type_rss = rte_le_to_cpu_32(desc->wb.lower.lo_dword.data);
> pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> pkt_flags |= rx_desc_status_to_pkt_flags(staterr);
> diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> index d8019f5..57a33c9 100644
> --- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> +++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> @@ -405,7 +405,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> /* Add VLAN tag if requested */
> if (txm->ol_flags & PKT_TX_VLAN_PKT) {
> txd->ti = 1;
> - txd->tci = rte_cpu_to_le_16(txm->vlan_tci);
> + txd->tci = rte_cpu_to_le_16(txm->vlan_tci0);
> }
>
> /* Record current mbuf for freeing it later in tx complete */
> @@ -629,10 +629,10 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> rcd->tci);
> rxm->ol_flags = PKT_RX_VLAN_PKT;
> /* Copy vlan tag in packet buffer */
> - rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
> + rxm->vlan_tci0 = rte_le_to_cpu_16((uint16_t)rcd->tci);
> } else {
> rxm->ol_flags = 0;
> - rxm->vlan_tci = 0;
> + rxm->vlan_tci0 = 0;
> }
>
> /* Initialize newly received packet buffer */
> --
> 1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* Re: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for QinQ support
2015-05-05 11:04 ` Ananyev, Konstantin
@ 2015-05-05 15:42 ` Chilikin, Andrey
2015-05-05 22:37 ` Ananyev, Konstantin
2015-05-06 4:06 ` Zhang, Helin
1 sibling, 1 reply; 55+ messages in thread
From: Chilikin, Andrey @ 2015-05-05 15:42 UTC (permalink / raw)
To: Ananyev, Konstantin, Zhang, Helin, dev
Hi Helin,
I would agree with Konstantin about the new naming for VLAN tags. I think we can keep the existing name vlan_tci and just name the new VLAN tag differently. I was thinking along the lines of "vlan_tci_outer" or "stag_tci". So vlan_tci will store the single VLAN when only one L2 tag is present, or the inner VLAN when two tags are present. "vlan_tci_outer" will store the outer VLAN when two L2 tags are present. The name "stag_tci" also looks like a good candidate, as in most cases where two tags are present the outer VLAN is addressed as the S-Tag, even if it is simple tag stacking.
Regards,
Andrey
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev,
> Konstantin
> Sent: Tuesday, May 5, 2015 12:05 PM
> To: Zhang, Helin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for
> QinQ support
>
> Hi Helin,
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Helin Zhang
> > Sent: Tuesday, May 05, 2015 3:32 AM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for
> > QinQ support
> >
> > To support QinQ, 'vlan_tci' should be replaced by 'vlan_tci0' and
> > 'vlan_tci1'. Also new offload flags of 'PKT_RX_QINQ_PKT' and
> > 'PKT_TX_QINQ_PKT' should be added.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > ---
> > app/test-pmd/flowgen.c | 2 +-
> > app/test-pmd/macfwd.c | 2 +-
> > app/test-pmd/macswap.c | 2 +-
> > app/test-pmd/rxonly.c | 2 +-
> > app/test-pmd/txonly.c | 2 +-
> > app/test/packet_burst_generator.c | 4 ++--
> > lib/librte_ether/rte_ether.h | 4 ++--
> > lib/librte_mbuf/rte_mbuf.h | 22 +++++++++++++++++++---
> > lib/librte_pmd_e1000/em_rxtx.c | 8 ++++----
> > lib/librte_pmd_e1000/igb_rxtx.c | 8 ++++----
> > lib/librte_pmd_enic/enic_ethdev.c | 2 +-
> > lib/librte_pmd_enic/enic_main.c | 2 +-
> > lib/librte_pmd_fm10k/fm10k_rxtx.c | 2 +-
> > lib/librte_pmd_i40e/i40e_rxtx.c | 8 ++++----
> > lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 11 +++++------
> > lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 6 +++---
> > 16 files changed, 51 insertions(+), 36 deletions(-)
> >
> > diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c index
> > 72016c9..f24b00c 100644
> > --- a/app/test-pmd/flowgen.c
> > +++ b/app/test-pmd/flowgen.c
> > @@ -207,7 +207,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
> > pkt->nb_segs = 1;
> > pkt->pkt_len = pkt_size;
> > pkt->ol_flags = ol_flags;
> > - pkt->vlan_tci = vlan_tci;
> > + pkt->vlan_tci0 = vlan_tci;
> > pkt->l2_len = sizeof(struct ether_hdr);
> > pkt->l3_len = sizeof(struct ipv4_hdr);
> > pkts_burst[nb_pkt] = pkt;
> > diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c index
> > 035e5eb..590b613 100644
> > --- a/app/test-pmd/macfwd.c
> > +++ b/app/test-pmd/macfwd.c
> > @@ -120,7 +120,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
> > mb->ol_flags = ol_flags;
> > mb->l2_len = sizeof(struct ether_hdr);
> > mb->l3_len = sizeof(struct ipv4_hdr);
> > - mb->vlan_tci = txp->tx_vlan_id;
> > + mb->vlan_tci0 = txp->tx_vlan_id;
> > }
> > nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst,
> nb_rx);
> > fs->tx_packets += nb_tx;
> > diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c index
> > 6729849..c355399 100644
> > --- a/app/test-pmd/macswap.c
> > +++ b/app/test-pmd/macswap.c
> > @@ -122,7 +122,7 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
> > mb->ol_flags = ol_flags;
> > mb->l2_len = sizeof(struct ether_hdr);
> > mb->l3_len = sizeof(struct ipv4_hdr);
> > - mb->vlan_tci = txp->tx_vlan_id;
> > + mb->vlan_tci0 = txp->tx_vlan_id;
> > }
> > nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst,
> nb_rx);
> > fs->tx_packets += nb_tx;
> > diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c index
> > ac56090..aa2cf7f 100644
> > --- a/app/test-pmd/rxonly.c
> > +++ b/app/test-pmd/rxonly.c
> > @@ -159,7 +159,7 @@ pkt_burst_receive(struct fwd_stream *fs)
> > mb->hash.fdir.hash, mb->hash.fdir.id);
> > }
> > if (ol_flags & PKT_RX_VLAN_PKT)
> > - printf(" - VLAN tci=0x%x", mb->vlan_tci);
> > + printf(" - VLAN tci=0x%x", mb->vlan_tci0);
> > if (is_encapsulation) {
> > struct ipv4_hdr *ipv4_hdr;
> > struct ipv6_hdr *ipv6_hdr;
> > diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c index
> > ca32c85..4a2827f 100644
> > --- a/app/test-pmd/txonly.c
> > +++ b/app/test-pmd/txonly.c
> > @@ -266,7 +266,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
> > pkt->nb_segs = tx_pkt_nb_segs;
> > pkt->pkt_len = tx_pkt_length;
> > pkt->ol_flags = ol_flags;
> > - pkt->vlan_tci = vlan_tci;
> > + pkt->vlan_tci0 = vlan_tci;
> > pkt->l2_len = sizeof(struct ether_hdr);
> > pkt->l3_len = sizeof(struct ipv4_hdr);
> > pkts_burst[nb_pkt] = pkt;
> > diff --git a/app/test/packet_burst_generator.c
> > b/app/test/packet_burst_generator.c
> > index b46eed7..959644c 100644
> > --- a/app/test/packet_burst_generator.c
> > +++ b/app/test/packet_burst_generator.c
> > @@ -270,7 +270,7 @@ nomore_mbuf:
> > pkt->l2_len = eth_hdr_size;
> >
> > if (ipv4) {
> > - pkt->vlan_tci = ETHER_TYPE_IPv4;
> > + pkt->vlan_tci0 = ETHER_TYPE_IPv4;
> > pkt->l3_len = sizeof(struct ipv4_hdr);
> >
> > if (vlan_enabled)
> > @@ -278,7 +278,7 @@ nomore_mbuf:
> > else
> > pkt->ol_flags = PKT_RX_IPV4_HDR;
> > } else {
> > - pkt->vlan_tci = ETHER_TYPE_IPv6;
> > + pkt->vlan_tci0 = ETHER_TYPE_IPv6;
> > pkt->l3_len = sizeof(struct ipv6_hdr);
> >
> > if (vlan_enabled)
> > diff --git a/lib/librte_ether/rte_ether.h
> > b/lib/librte_ether/rte_ether.h index 49f4576..6d682a2 100644
> > --- a/lib/librte_ether/rte_ether.h
> > +++ b/lib/librte_ether/rte_ether.h
> > @@ -357,7 +357,7 @@ static inline int rte_vlan_strip(struct rte_mbuf
> > *m)
> >
> > struct vlan_hdr *vh = (struct vlan_hdr *)(eh + 1);
> > m->ol_flags |= PKT_RX_VLAN_PKT;
> > - m->vlan_tci = rte_be_to_cpu_16(vh->vlan_tci);
> > + m->vlan_tci0 = rte_be_to_cpu_16(vh->vlan_tci);
> >
> > /* Copy ether header over rather than moving whole packet */
> > memmove(rte_pktmbuf_adj(m, sizeof(struct vlan_hdr)), @@ -404,7
> > +404,7 @@ static inline int rte_vlan_insert(struct rte_mbuf **m)
> > nh->ether_type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
> >
> > vh = (struct vlan_hdr *) (nh + 1);
> > - vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci);
> > + vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci0);
> >
> > return 0;
> > }
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index 70b0987..6eed54f 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -101,11 +101,17 @@ extern "C" {
> > #define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet
> with IPv6 header. */
> > #define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR
> match. */
> > #define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if
> FDIR match. */
> > +#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double
> VLAN stripped. */
> > /* add new RX flags here */
> >
> > /* add new TX flags here */
> >
> > /**
> > + * Second VLAN insertion (QinQ) flag.
> > + */
> > +#define PKT_TX_QINQ_PKT (1ULL << 49)
> > +
> > +/**
> > * TCP segmentation offload. To enable this offload feature for a
> > * packet to be transmitted on hardware supporting TSO:
> > * - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag
> > implies @@ -268,7 +274,6 @@ struct rte_mbuf {
> >
> > uint16_t data_len; /**< Amount of data in segment buffer. */
> > uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
> > - uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
> > uint16_t reserved;
>
> Now there is an implicit 2-byte hole between 'reserved' and 'rss'.
> Probably better to make it explicit - make 'reserved' uint32_t.
>
> Another thing - it looks like your change will break ixgbe vector RX.
>
> > union {
> > uint32_t rss; /**< RSS hash result if RSS enabled */
> > @@ -289,6 +294,15 @@ struct rte_mbuf {
> > uint32_t usr; /**< User defined tags. See
> rte_distributor_process() */
> > } hash; /**< hash information */
> >
> > + /* VLAN tags */
> > + union {
> > + uint32_t vlan_tags;
> > + struct {
> > + uint16_t vlan_tci0;
> > + uint16_t vlan_tci1;
>
> Do you really need to change vlan_tci to vlan_tci0?
> Can't you keep 'vlan_tci' for first vlan tag, and add something like
> 'vlan_tci_ext', or 'vlan_tci_next' for second one?
> That would save you a lot of changes, and users who use a single VLAN
> wouldn't need to update their code for 2.1.
>
> > + };
> > + };
> > +
> > uint32_t seqn; /**< Sequence number. See also
> rte_reorder_insert()
> > */
> >
> > /* second cache line - fields only used in slow path or on TX */ @@
> > -766,7 +780,8 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf
> *m)
> > m->next = NULL;
> > m->pkt_len = 0;
> > m->tx_offload = 0;
> > - m->vlan_tci = 0;
> > + m->vlan_tci0 = 0;
> > + m->vlan_tci1 = 0;
>
> Why not just:
> m->vlan_tags = 0;
> ?
>
> > m->nb_segs = 1;
> > m->port = 0xff;
> >
> > @@ -838,7 +853,8 @@ static inline void rte_pktmbuf_attach(struct
> rte_mbuf *mi, struct rte_mbuf *m)
> > mi->data_off = m->data_off;
> > mi->data_len = m->data_len;
> > mi->port = m->port;
> > - mi->vlan_tci = m->vlan_tci;
> > + mi->vlan_tci0 = m->vlan_tci0;
> > + mi->vlan_tci1 = m->vlan_tci1;
>
> Same thing, why not:
> mi->vlan_tags = m->vlan_tags;
> ?
>
> > mi->tx_offload = m->tx_offload;
> > mi->hash = m->hash;
> >
> > diff --git a/lib/librte_pmd_e1000/em_rxtx.c
> > b/lib/librte_pmd_e1000/em_rxtx.c index 64d067c..422f4ed 100644
> > --- a/lib/librte_pmd_e1000/em_rxtx.c
> > +++ b/lib/librte_pmd_e1000/em_rxtx.c
> > @@ -438,7 +438,7 @@ eth_em_xmit_pkts(void *tx_queue, struct
> rte_mbuf **tx_pkts,
> > /* If hardware offload required */
> > tx_ol_req = (ol_flags & (PKT_TX_IP_CKSUM |
> PKT_TX_L4_MASK));
> > if (tx_ol_req) {
> > - hdrlen.f.vlan_tci = tx_pkt->vlan_tci;
> > + hdrlen.f.vlan_tci = tx_pkt->vlan_tci0;
> > hdrlen.f.l2_len = tx_pkt->l2_len;
> > hdrlen.f.l3_len = tx_pkt->l3_len;
> > /* If new context to be built or reuse the exist ctx. */
> @@ -534,7
> > +534,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf
> **tx_pkts,
> > /* Set VLAN Tag offload fields. */
> > if (ol_flags & PKT_TX_VLAN_PKT) {
> > cmd_type_len |= E1000_TXD_CMD_VLE;
> > - popts_spec = tx_pkt->vlan_tci <<
> E1000_TXD_VLAN_SHIFT;
> > + popts_spec = tx_pkt->vlan_tci0 <<
> E1000_TXD_VLAN_SHIFT;
> > }
> >
> > if (tx_ol_req) {
> > @@ -800,7 +800,7 @@ eth_em_recv_pkts(void *rx_queue, struct
> rte_mbuf **rx_pkts,
> > rx_desc_error_to_pkt_flags(rxd.errors);
> >
> > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
> > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.special);
> >
> > /*
> > * Store the mbuf address into the next entry of the array
> @@
> > -1026,7 +1026,7 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct
> rte_mbuf **rx_pkts,
> >
> rx_desc_error_to_pkt_flags(rxd.errors);
> >
> > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
> > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.special);
> >
> > /* Prefetch data of first segment, if configured to do so. */
> > rte_packet_prefetch((char *)first_seg->buf_addr + diff --git
> > a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
> > index 80d05c0..fda273f 100644
> > --- a/lib/librte_pmd_e1000/igb_rxtx.c
> > +++ b/lib/librte_pmd_e1000/igb_rxtx.c
> > @@ -401,7 +401,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct
> rte_mbuf **tx_pkts,
> > ol_flags = tx_pkt->ol_flags;
> > l2_l3_len.l2_len = tx_pkt->l2_len;
> > l2_l3_len.l3_len = tx_pkt->l3_len;
> > - vlan_macip_lens.f.vlan_tci = tx_pkt->vlan_tci;
> > + vlan_macip_lens.f.vlan_tci = tx_pkt->vlan_tci0;
> > vlan_macip_lens.f.l2_l3_len = l2_l3_len.u16;
> > tx_ol_req = ol_flags & IGB_TX_OFFLOAD_MASK;
> >
> > @@ -784,7 +784,7 @@ eth_igb_recv_pkts(void *rx_queue, struct
> rte_mbuf **rx_pkts,
> > rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
> > hlen_type_rss =
> rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> >
> > pkt_flags =
> rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
> > @@ -1015,10 +1015,10 @@ eth_igb_recv_scattered_pkts(void *rx_queue,
> struct rte_mbuf **rx_pkts,
> > first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
> >
> > /*
> > - * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
> > + * The vlan_tci0 field is only valid when PKT_RX_VLAN_PKT is
> > * set in the pkt_flags field.
> > */
> > - first_seg->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > + first_seg->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > hlen_type_rss =
> rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> > pkt_flags =
> rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
> > diff --git a/lib/librte_pmd_enic/enic_ethdev.c
> > b/lib/librte_pmd_enic/enic_ethdev.c
> > index 69ad01b..45c0e14 100644
> > --- a/lib/librte_pmd_enic/enic_ethdev.c
> > +++ b/lib/librte_pmd_enic/enic_ethdev.c
> > @@ -506,7 +506,7 @@ static uint16_t enicpmd_xmit_pkts(void *tx_queue,
> struct rte_mbuf **tx_pkts,
> > return index;
> > }
> > pkt_len = tx_pkt->pkt_len;
> > - vlan_id = tx_pkt->vlan_tci;
> > + vlan_id = tx_pkt->vlan_tci0;
> > ol_flags = tx_pkt->ol_flags;
> > for (frags = 0; inc_len < pkt_len; frags++) {
> > if (!tx_pkt)
> > diff --git a/lib/librte_pmd_enic/enic_main.c
> > b/lib/librte_pmd_enic/enic_main.c index 15313c2..d1660a1 100644
> > --- a/lib/librte_pmd_enic/enic_main.c
> > +++ b/lib/librte_pmd_enic/enic_main.c
> > @@ -490,7 +490,7 @@ static int enic_rq_indicate_buf(struct vnic_rq
> > *rq,
> >
> > if (vlan_tci) {
> > rx_pkt->ol_flags |= PKT_RX_VLAN_PKT;
> > - rx_pkt->vlan_tci = vlan_tci;
> > + rx_pkt->vlan_tci0 = vlan_tci;
> > }
> >
> > return eop;
> > diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > b/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > index 83bddfc..ba3b8aa 100644
> > --- a/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > @@ -410,7 +410,7 @@ static inline void tx_xmit_pkt(struct
> > fm10k_tx_queue *q, struct rte_mbuf *mb)
> >
> > /* set vlan if requested */
> > if (mb->ol_flags & PKT_TX_VLAN_PKT)
> > - q->hw_ring[q->next_free].vlan = mb->vlan_tci;
> > + q->hw_ring[q->next_free].vlan = mb->vlan_tci0;
> >
> > /* fill up the rings */
> > for (; mb != NULL; mb = mb->next) {
> > diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c
> > b/lib/librte_pmd_i40e/i40e_rxtx.c index 493cfa3..1fe377c 100644
> > --- a/lib/librte_pmd_i40e/i40e_rxtx.c
> > +++ b/lib/librte_pmd_i40e/i40e_rxtx.c
> > @@ -703,7 +703,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
> > I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq-
> >crc_len;
> > mb->data_len = pkt_len;
> > mb->pkt_len = pkt_len;
> > - mb->vlan_tci = rx_status &
> > + mb->vlan_tci0 = rx_status &
> > (1 <<
> I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
> > rte_le_to_cpu_16(\
> > rxdp[j].wb.qword0.lo_dword.l2tag1) : 0; @@
> -947,7 +947,7 @@
> > i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t
> nb_pkts)
> > rxm->data_len = rx_packet_len;
> > rxm->port = rxq->port_id;
> >
> > - rxm->vlan_tci = rx_status &
> > + rxm->vlan_tci0 = rx_status &
> > (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
> > rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) :
> 0;
> > pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
> > @@ -1106,7 +1106,7 @@ i40e_recv_scattered_pkts(void *rx_queue,
> > }
> >
> > first_seg->port = rxq->port_id;
> > - first_seg->vlan_tci = (rx_status &
> > + first_seg->vlan_tci0 = (rx_status &
> > (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
> > rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) :
> 0;
> > pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
> > @@ -1291,7 +1291,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf
> > **tx_pkts, uint16_t nb_pkts)
> >
> > /* Descriptor based VLAN insertion */
> > if (ol_flags & PKT_TX_VLAN_PKT) {
> > - tx_flags |= tx_pkt->vlan_tci <<
> > + tx_flags |= tx_pkt->vlan_tci0 <<
> > I40E_TX_FLAG_L2TAG1_SHIFT;
> > tx_flags |= I40E_TX_FLAG_INSERT_VLAN;
> > td_cmd |= I40E_TX_DESC_CMD_IL2TAG1; diff --git
> > a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > index 7f15f15..fd664da 100644
> > --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > @@ -612,7 +612,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf
> **tx_pkts,
> > tx_offload.l2_len = tx_pkt->l2_len;
> > tx_offload.l3_len = tx_pkt->l3_len;
> > tx_offload.l4_len = tx_pkt->l4_len;
> > - tx_offload.vlan_tci = tx_pkt->vlan_tci;
> > + tx_offload.vlan_tci = tx_pkt->vlan_tci0;
> > tx_offload.tso_segsz = tx_pkt->tso_segsz;
> >
> > /* If new context need be built or reuse the exist ctx.
> */ @@
> > -981,8 +981,7 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
> > pkt_len = (uint16_t)(rxdp[j].wb.upper.length - rxq-
> >crc_len);
> > mb->data_len = pkt_len;
> > mb->pkt_len = pkt_len;
> > - mb->vlan_tci = rxdp[j].wb.upper.vlan;
> > - mb->vlan_tci =
> rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
> > + mb->vlan_tci0 =
> rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
> >
> > /* convert descriptor fields to rte mbuf flags */
> > pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
> > @@ -1327,7 +1326,7 @@ ixgbe_recv_pkts(void *rx_queue, struct
> rte_mbuf
> > **rx_pkts,
> >
> > hlen_type_rss =
> rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> >
> > pkt_flags =
> rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
> > @@ -1412,10 +1411,10 @@ ixgbe_fill_cluster_head_buf(
> > head->port = port_id;
> >
> > /*
> > - * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
> > + * The vlan_tci0 field is only valid when PKT_RX_VLAN_PKT is
> > * set in the pkt_flags field.
> > */
> > - head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
> > + head->vlan_tci0 = rte_le_to_cpu_16(desc->wb.upper.vlan);
> > hlen_type_rss = rte_le_to_cpu_32(desc->wb.lower.lo_dword.data);
> > pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > pkt_flags |= rx_desc_status_to_pkt_flags(staterr);
> > diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > index d8019f5..57a33c9 100644
> > --- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > +++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > @@ -405,7 +405,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct
> rte_mbuf **tx_pkts,
> > /* Add VLAN tag if requested */
> > if (txm->ol_flags & PKT_TX_VLAN_PKT) {
> > txd->ti = 1;
> > - txd->tci = rte_cpu_to_le_16(txm->vlan_tci);
> > + txd->tci = rte_cpu_to_le_16(txm->vlan_tci0);
> > }
> >
> > /* Record current mbuf for freeing it later in tx
> complete */ @@
> > -629,10 +629,10 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf
> **rx_pkts, uint16_t nb_pkts)
> > rcd->tci);
> > rxm->ol_flags = PKT_RX_VLAN_PKT;
> > /* Copy vlan tag in packet buffer */
> > - rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd-
> >tci);
> > + rxm->vlan_tci0 = rte_le_to_cpu_16((uint16_t)rcd-
> >tci);
> > } else {
> > rxm->ol_flags = 0;
> > - rxm->vlan_tci = 0;
> > + rxm->vlan_tci0 = 0;
> > }
> >
> > /* Initialize newly received packet buffer */
> > --
> > 1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* Re: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for QinQ support
2015-05-05 15:42 ` Chilikin, Andrey
@ 2015-05-05 22:37 ` Ananyev, Konstantin
2015-05-06 4:07 ` Zhang, Helin
0 siblings, 1 reply; 55+ messages in thread
From: Ananyev, Konstantin @ 2015-05-05 22:37 UTC (permalink / raw)
To: Chilikin, Andrey, Zhang, Helin, dev
> -----Original Message-----
> From: Chilikin, Andrey
> Sent: Tuesday, May 05, 2015 4:43 PM
> To: Ananyev, Konstantin; Zhang, Helin; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for QinQ support
>
> Hi Helin,
>
> I would agree with Konstantin about the new naming for VLAN tags. I think we can keep the existing name vlan_tci and just name the new
> VLAN tag differently, along the lines of "vlan_tci_outer" or "stag_tci". So vlan_tci will store the single VLAN when only one L2
> tag is present, or the inner VLAN when two tags are present. "vlan_tci_outer" will store the outer VLAN when two L2 tags are present.
> The name "stag_tci" also looks like a good candidate, as in most cases where two tags are present the outer VLAN is addressed as the S-Tag,
> even if it is simple tag stacking.
Yep, I suppose "vlan_tci_outer" or "stag_tci" is a better name than what I suggested.
Konstantin
>
> Regards,
> Andrey
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev,
> > Konstantin
> > Sent: Tuesday, May 5, 2015 12:05 PM
> > To: Zhang, Helin; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for
> > QinQ support
> >
> > Hi Helin,
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Helin Zhang
> > > Sent: Tuesday, May 05, 2015 3:32 AM
> > > To: dev@dpdk.org
> > > Subject: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for
> > > QinQ support
> > >
> > > To support QinQ, 'vlan_tci' should be replaced by 'vlan_tci0' and
> > > 'vlan_tci1'. Also new offload flags of 'PKT_RX_QINQ_PKT' and
> > > 'PKT_TX_QINQ_PKT' should be added.
> > >
> > > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > > ---
> > > app/test-pmd/flowgen.c | 2 +-
> > > app/test-pmd/macfwd.c | 2 +-
> > > app/test-pmd/macswap.c | 2 +-
> > > app/test-pmd/rxonly.c | 2 +-
> > > app/test-pmd/txonly.c | 2 +-
> > > app/test/packet_burst_generator.c | 4 ++--
> > > lib/librte_ether/rte_ether.h | 4 ++--
> > > lib/librte_mbuf/rte_mbuf.h | 22 +++++++++++++++++++---
> > > lib/librte_pmd_e1000/em_rxtx.c | 8 ++++----
> > > lib/librte_pmd_e1000/igb_rxtx.c | 8 ++++----
> > > lib/librte_pmd_enic/enic_ethdev.c | 2 +-
> > > lib/librte_pmd_enic/enic_main.c | 2 +-
> > > lib/librte_pmd_fm10k/fm10k_rxtx.c | 2 +-
> > > lib/librte_pmd_i40e/i40e_rxtx.c | 8 ++++----
> > > lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 11 +++++------
> > > lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 6 +++---
> > > 16 files changed, 51 insertions(+), 36 deletions(-)
> > >
> > > diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c index
> > > 72016c9..f24b00c 100644
> > > --- a/app/test-pmd/flowgen.c
> > > +++ b/app/test-pmd/flowgen.c
> > > @@ -207,7 +207,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
> > > pkt->nb_segs = 1;
> > > pkt->pkt_len = pkt_size;
> > > pkt->ol_flags = ol_flags;
> > > - pkt->vlan_tci = vlan_tci;
> > > + pkt->vlan_tci0 = vlan_tci;
> > > pkt->l2_len = sizeof(struct ether_hdr);
> > > pkt->l3_len = sizeof(struct ipv4_hdr);
> > > pkts_burst[nb_pkt] = pkt;
> > > diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c index
> > > 035e5eb..590b613 100644
> > > --- a/app/test-pmd/macfwd.c
> > > +++ b/app/test-pmd/macfwd.c
> > > @@ -120,7 +120,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
> > > mb->ol_flags = ol_flags;
> > > mb->l2_len = sizeof(struct ether_hdr);
> > > mb->l3_len = sizeof(struct ipv4_hdr);
> > > - mb->vlan_tci = txp->tx_vlan_id;
> > > + mb->vlan_tci0 = txp->tx_vlan_id;
> > > }
> > > nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst,
> > nb_rx);
> > > fs->tx_packets += nb_tx;
> > > diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c index
> > > 6729849..c355399 100644
> > > --- a/app/test-pmd/macswap.c
> > > +++ b/app/test-pmd/macswap.c
> > > @@ -122,7 +122,7 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
> > > mb->ol_flags = ol_flags;
> > > mb->l2_len = sizeof(struct ether_hdr);
> > > mb->l3_len = sizeof(struct ipv4_hdr);
> > > - mb->vlan_tci = txp->tx_vlan_id;
> > > + mb->vlan_tci0 = txp->tx_vlan_id;
> > > }
> > > nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst,
> > nb_rx);
> > > fs->tx_packets += nb_tx;
> > > diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c index
> > > ac56090..aa2cf7f 100644
> > > --- a/app/test-pmd/rxonly.c
> > > +++ b/app/test-pmd/rxonly.c
> > > @@ -159,7 +159,7 @@ pkt_burst_receive(struct fwd_stream *fs)
> > > mb->hash.fdir.hash, mb->hash.fdir.id);
> > > }
> > > if (ol_flags & PKT_RX_VLAN_PKT)
> > > - printf(" - VLAN tci=0x%x", mb->vlan_tci);
> > > + printf(" - VLAN tci=0x%x", mb->vlan_tci0);
> > > if (is_encapsulation) {
> > > struct ipv4_hdr *ipv4_hdr;
> > > struct ipv6_hdr *ipv6_hdr;
> > > diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c index
> > > ca32c85..4a2827f 100644
> > > --- a/app/test-pmd/txonly.c
> > > +++ b/app/test-pmd/txonly.c
> > > @@ -266,7 +266,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
> > > pkt->nb_segs = tx_pkt_nb_segs;
> > > pkt->pkt_len = tx_pkt_length;
> > > pkt->ol_flags = ol_flags;
> > > - pkt->vlan_tci = vlan_tci;
> > > + pkt->vlan_tci0 = vlan_tci;
> > > pkt->l2_len = sizeof(struct ether_hdr);
> > > pkt->l3_len = sizeof(struct ipv4_hdr);
> > > pkts_burst[nb_pkt] = pkt;
> > > diff --git a/app/test/packet_burst_generator.c
> > > b/app/test/packet_burst_generator.c
> > > index b46eed7..959644c 100644
> > > --- a/app/test/packet_burst_generator.c
> > > +++ b/app/test/packet_burst_generator.c
> > > @@ -270,7 +270,7 @@ nomore_mbuf:
> > > pkt->l2_len = eth_hdr_size;
> > >
> > > if (ipv4) {
> > > - pkt->vlan_tci = ETHER_TYPE_IPv4;
> > > + pkt->vlan_tci0 = ETHER_TYPE_IPv4;
> > > pkt->l3_len = sizeof(struct ipv4_hdr);
> > >
> > > if (vlan_enabled)
> > > @@ -278,7 +278,7 @@ nomore_mbuf:
> > > else
> > > pkt->ol_flags = PKT_RX_IPV4_HDR;
> > > } else {
> > > - pkt->vlan_tci = ETHER_TYPE_IPv6;
> > > + pkt->vlan_tci0 = ETHER_TYPE_IPv6;
> > > pkt->l3_len = sizeof(struct ipv6_hdr);
> > >
> > > if (vlan_enabled)
> > > diff --git a/lib/librte_ether/rte_ether.h
> > > b/lib/librte_ether/rte_ether.h index 49f4576..6d682a2 100644
> > > --- a/lib/librte_ether/rte_ether.h
> > > +++ b/lib/librte_ether/rte_ether.h
> > > @@ -357,7 +357,7 @@ static inline int rte_vlan_strip(struct rte_mbuf
> > > *m)
> > >
> > > struct vlan_hdr *vh = (struct vlan_hdr *)(eh + 1);
> > > m->ol_flags |= PKT_RX_VLAN_PKT;
> > > - m->vlan_tci = rte_be_to_cpu_16(vh->vlan_tci);
> > > + m->vlan_tci0 = rte_be_to_cpu_16(vh->vlan_tci);
> > >
> > > /* Copy ether header over rather than moving whole packet */
> > > memmove(rte_pktmbuf_adj(m, sizeof(struct vlan_hdr)), @@ -404,7
> > > +404,7 @@ static inline int rte_vlan_insert(struct rte_mbuf **m)
> > > nh->ether_type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
> > >
> > > vh = (struct vlan_hdr *) (nh + 1);
> > > - vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci);
> > > + vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci0);
> > >
> > > return 0;
> > > }
> > > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > > index 70b0987..6eed54f 100644
> > > --- a/lib/librte_mbuf/rte_mbuf.h
> > > +++ b/lib/librte_mbuf/rte_mbuf.h
> > > @@ -101,11 +101,17 @@ extern "C" {
> > > #define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet
> > with IPv6 header. */
> > > #define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR
> > match. */
> > > #define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if
> > FDIR match. */
> > > +#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double
> > VLAN stripped. */
> > > /* add new RX flags here */
> > >
> > > /* add new TX flags here */
> > >
> > > /**
> > > + * Second VLAN insertion (QinQ) flag.
> > > + */
> > > +#define PKT_TX_QINQ_PKT (1ULL << 49)
> > > +
> > > +/**
> > > * TCP segmentation offload. To enable this offload feature for a
> > > * packet to be transmitted on hardware supporting TSO:
> > > * - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag
> > > implies @@ -268,7 +274,6 @@ struct rte_mbuf {
> > >
> > > uint16_t data_len; /**< Amount of data in segment buffer. */
> > > uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
> > > - uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
> > > uint16_t reserved;
> >
> > Now there is an implicit 2-byte hole between 'reserved' and 'rss'.
> > Probably better to make it explicit - make 'reserved' uint32_t.
> >
> > Another thing - it looks like your change will break ixgbe vector RX.
> >
> > > union {
> > > uint32_t rss; /**< RSS hash result if RSS enabled */
> > > @@ -289,6 +294,15 @@ struct rte_mbuf {
> > > uint32_t usr; /**< User defined tags. See
> > rte_distributor_process() */
> > > } hash; /**< hash information */
> > >
> > > + /* VLAN tags */
> > > + union {
> > > + uint32_t vlan_tags;
> > > + struct {
> > > + uint16_t vlan_tci0;
> > > + uint16_t vlan_tci1;
> >
> > Do you really need to change vlan_tci to vlan_tci0?
> > Can't you keep 'vlan_tci' for first vlan tag, and add something like
> > 'vlan_tci_ext', or 'vlan_tci_next' for second one?
> > That would save you a lot of changes, and users who use a single VLAN
> > wouldn't need to update their code for 2.1.
> >
> > > + };
> > > + };
> > > +
> > > uint32_t seqn; /**< Sequence number. See also
> > rte_reorder_insert()
> > > */
> > >
> > > /* second cache line - fields only used in slow path or on TX */ @@
> > > -766,7 +780,8 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf
> > *m)
> > > m->next = NULL;
> > > m->pkt_len = 0;
> > > m->tx_offload = 0;
> > > - m->vlan_tci = 0;
> > > + m->vlan_tci0 = 0;
> > > + m->vlan_tci1 = 0;
> >
> > Why not just:
> > m->vlan_tags = 0;
> > ?
> >
> > > m->nb_segs = 1;
> > > m->port = 0xff;
> > >
> > > @@ -838,7 +853,8 @@ static inline void rte_pktmbuf_attach(struct
> > rte_mbuf *mi, struct rte_mbuf *m)
> > > mi->data_off = m->data_off;
> > > mi->data_len = m->data_len;
> > > mi->port = m->port;
> > > - mi->vlan_tci = m->vlan_tci;
> > > + mi->vlan_tci0 = m->vlan_tci0;
> > > + mi->vlan_tci1 = m->vlan_tci1;
> >
> > Same thing, why not:
> > mi->vlan_tags = m->vlan_tags;
> > ?
> >
> > > mi->tx_offload = m->tx_offload;
> > > mi->hash = m->hash;
> > >
> > > diff --git a/lib/librte_pmd_e1000/em_rxtx.c
> > > b/lib/librte_pmd_e1000/em_rxtx.c index 64d067c..422f4ed 100644
> > > --- a/lib/librte_pmd_e1000/em_rxtx.c
> > > +++ b/lib/librte_pmd_e1000/em_rxtx.c
> > > @@ -438,7 +438,7 @@ eth_em_xmit_pkts(void *tx_queue, struct
> > rte_mbuf **tx_pkts,
> > > /* If hardware offload required */
> > > tx_ol_req = (ol_flags & (PKT_TX_IP_CKSUM |
> > PKT_TX_L4_MASK));
> > > if (tx_ol_req) {
> > > - hdrlen.f.vlan_tci = tx_pkt->vlan_tci;
> > > + hdrlen.f.vlan_tci = tx_pkt->vlan_tci0;
> > > hdrlen.f.l2_len = tx_pkt->l2_len;
> > > hdrlen.f.l3_len = tx_pkt->l3_len;
> > > /* If new context to be built or reuse the exist ctx. */
> > @@ -534,7
> > > +534,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf
> > **tx_pkts,
> > > /* Set VLAN Tag offload fields. */
> > > if (ol_flags & PKT_TX_VLAN_PKT) {
> > > cmd_type_len |= E1000_TXD_CMD_VLE;
> > > - popts_spec = tx_pkt->vlan_tci <<
> > E1000_TXD_VLAN_SHIFT;
> > > + popts_spec = tx_pkt->vlan_tci0 <<
> > E1000_TXD_VLAN_SHIFT;
> > > }
> > >
> > > if (tx_ol_req) {
> > > @@ -800,7 +800,7 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> > > rx_desc_error_to_pkt_flags(rxd.errors);
> > >
> > > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
> > > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.special);
> > >
> > > /*
> > > * Store the mbuf address into the next entry of the array
> > > @@ -1026,7 +1026,7 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> > > rx_desc_error_to_pkt_flags(rxd.errors);
> > >
> > > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
> > > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.special);
> > >
> > > /* Prefetch data of first segment, if configured to do so. */
> > > rte_packet_prefetch((char *)first_seg->buf_addr +
> > > diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
> > > index 80d05c0..fda273f 100644
> > > --- a/lib/librte_pmd_e1000/igb_rxtx.c
> > > +++ b/lib/librte_pmd_e1000/igb_rxtx.c
> > > @@ -401,7 +401,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> > > ol_flags = tx_pkt->ol_flags;
> > > l2_l3_len.l2_len = tx_pkt->l2_len;
> > > l2_l3_len.l3_len = tx_pkt->l3_len;
> > > - vlan_macip_lens.f.vlan_tci = tx_pkt->vlan_tci;
> > > + vlan_macip_lens.f.vlan_tci = tx_pkt->vlan_tci0;
> > > vlan_macip_lens.f.l2_l3_len = l2_l3_len.u16;
> > > tx_ol_req = ol_flags & IGB_TX_OFFLOAD_MASK;
> > >
> > > @@ -784,7 +784,7 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> > > rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
> > > hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> > > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > >
> > > pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > > pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
> > > @@ -1015,10 +1015,10 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> > > first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
> > >
> > > /*
> > > - * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
> > > + * The vlan_tci0 field is only valid when PKT_RX_VLAN_PKT is
> > > * set in the pkt_flags field.
> > > */
> > > - first_seg->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > > + first_seg->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > > hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> > > pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > > pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
> > > diff --git a/lib/librte_pmd_enic/enic_ethdev.c
> > > b/lib/librte_pmd_enic/enic_ethdev.c
> > > index 69ad01b..45c0e14 100644
> > > --- a/lib/librte_pmd_enic/enic_ethdev.c
> > > +++ b/lib/librte_pmd_enic/enic_ethdev.c
> > > @@ -506,7 +506,7 @@ static uint16_t enicpmd_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> > > return index;
> > > }
> > > pkt_len = tx_pkt->pkt_len;
> > > - vlan_id = tx_pkt->vlan_tci;
> > > + vlan_id = tx_pkt->vlan_tci0;
> > > ol_flags = tx_pkt->ol_flags;
> > > for (frags = 0; inc_len < pkt_len; frags++) {
> > > if (!tx_pkt)
> > > diff --git a/lib/librte_pmd_enic/enic_main.c
> > > b/lib/librte_pmd_enic/enic_main.c index 15313c2..d1660a1 100644
> > > --- a/lib/librte_pmd_enic/enic_main.c
> > > +++ b/lib/librte_pmd_enic/enic_main.c
> > > @@ -490,7 +490,7 @@ static int enic_rq_indicate_buf(struct vnic_rq
> > > *rq,
> > >
> > > if (vlan_tci) {
> > > rx_pkt->ol_flags |= PKT_RX_VLAN_PKT;
> > > - rx_pkt->vlan_tci = vlan_tci;
> > > + rx_pkt->vlan_tci0 = vlan_tci;
> > > }
> > >
> > > return eop;
> > > diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > > b/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > > index 83bddfc..ba3b8aa 100644
> > > --- a/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > > +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > > @@ -410,7 +410,7 @@ static inline void tx_xmit_pkt(struct
> > > fm10k_tx_queue *q, struct rte_mbuf *mb)
> > >
> > > /* set vlan if requested */
> > > if (mb->ol_flags & PKT_TX_VLAN_PKT)
> > > - q->hw_ring[q->next_free].vlan = mb->vlan_tci;
> > > + q->hw_ring[q->next_free].vlan = mb->vlan_tci0;
> > >
> > > /* fill up the rings */
> > > for (; mb != NULL; mb = mb->next) {
> > > diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c
> > > b/lib/librte_pmd_i40e/i40e_rxtx.c index 493cfa3..1fe377c 100644
> > > --- a/lib/librte_pmd_i40e/i40e_rxtx.c
> > > +++ b/lib/librte_pmd_i40e/i40e_rxtx.c
> > > @@ -703,7 +703,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
> > > I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
> > > mb->data_len = pkt_len;
> > > mb->pkt_len = pkt_len;
> > > - mb->vlan_tci = rx_status &
> > > + mb->vlan_tci0 = rx_status &
> > > (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
> > > rte_le_to_cpu_16(\
> > > rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
> > > @@ -947,7 +947,7 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> > > rxm->data_len = rx_packet_len;
> > > rxm->port = rxq->port_id;
> > >
> > > - rxm->vlan_tci = rx_status &
> > > + rxm->vlan_tci0 = rx_status &
> > > (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
> > > rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
> > > pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
> > > @@ -1106,7 +1106,7 @@ i40e_recv_scattered_pkts(void *rx_queue,
> > > }
> > >
> > > first_seg->port = rxq->port_id;
> > > - first_seg->vlan_tci = (rx_status &
> > > + first_seg->vlan_tci0 = (rx_status &
> > > (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
> > > rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
> > > pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
> > > @@ -1291,7 +1291,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf
> > > **tx_pkts, uint16_t nb_pkts)
> > >
> > > /* Descriptor based VLAN insertion */
> > > if (ol_flags & PKT_TX_VLAN_PKT) {
> > > - tx_flags |= tx_pkt->vlan_tci <<
> > > + tx_flags |= tx_pkt->vlan_tci0 <<
> > > I40E_TX_FLAG_L2TAG1_SHIFT;
> > > tx_flags |= I40E_TX_FLAG_INSERT_VLAN;
> > > td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
> > > diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > > index 7f15f15..fd664da 100644
> > > --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > > +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > > @@ -612,7 +612,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> > > tx_offload.l2_len = tx_pkt->l2_len;
> > > tx_offload.l3_len = tx_pkt->l3_len;
> > > tx_offload.l4_len = tx_pkt->l4_len;
> > > - tx_offload.vlan_tci = tx_pkt->vlan_tci;
> > > + tx_offload.vlan_tci = tx_pkt->vlan_tci0;
> > > tx_offload.tso_segsz = tx_pkt->tso_segsz;
> > >
> > > /* If new context need be built or reuse the exist ctx. */
> > > @@ -981,8 +981,7 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
> > > pkt_len = (uint16_t)(rxdp[j].wb.upper.length - rxq->crc_len);
> > > mb->data_len = pkt_len;
> > > mb->pkt_len = pkt_len;
> > > - mb->vlan_tci = rxdp[j].wb.upper.vlan;
> > > - mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
> > > + mb->vlan_tci0 = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
> > >
> > > /* convert descriptor fields to rte mbuf flags */
> > > pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
> > > @@ -1327,7 +1326,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> > >
> > > hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> > > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > >
> > > pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > > pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
> > > @@ -1412,10 +1411,10 @@ ixgbe_fill_cluster_head_buf(
> > > head->port = port_id;
> > >
> > > /*
> > > - * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
> > > + * The vlan_tci0 field is only valid when PKT_RX_VLAN_PKT is
> > > * set in the pkt_flags field.
> > > */
> > > - head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
> > > + head->vlan_tci0 = rte_le_to_cpu_16(desc->wb.upper.vlan);
> > > hlen_type_rss = rte_le_to_cpu_32(desc->wb.lower.lo_dword.data);
> > > pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > > pkt_flags |= rx_desc_status_to_pkt_flags(staterr);
> > > diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > > b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > > index d8019f5..57a33c9 100644
> > > --- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > > +++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > > @@ -405,7 +405,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> > > /* Add VLAN tag if requested */
> > > if (txm->ol_flags & PKT_TX_VLAN_PKT) {
> > > txd->ti = 1;
> > > - txd->tci = rte_cpu_to_le_16(txm->vlan_tci);
> > > + txd->tci = rte_cpu_to_le_16(txm->vlan_tci0);
> > > }
> > >
> > > /* Record current mbuf for freeing it later in tx complete */
> > > @@ -629,10 +629,10 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> > > rcd->tci);
> > > rxm->ol_flags = PKT_RX_VLAN_PKT;
> > > /* Copy vlan tag in packet buffer */
> > > - rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
> > > + rxm->vlan_tci0 = rte_le_to_cpu_16((uint16_t)rcd->tci);
> > > } else {
> > > rxm->ol_flags = 0;
> > > - rxm->vlan_tci = 0;
> > > + rxm->vlan_tci0 = 0;
> > > }
> > >
> > > /* Initialize newly received packet buffer */
> > > --
> > > 1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* Re: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for QinQ support
2015-05-05 11:04 ` Ananyev, Konstantin
2015-05-05 15:42 ` Chilikin, Andrey
@ 2015-05-06 4:06 ` Zhang, Helin
2015-05-06 8:39 ` Bruce Richardson
1 sibling, 1 reply; 55+ messages in thread
From: Zhang, Helin @ 2015-05-06 4:06 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Tuesday, May 5, 2015 7:05 PM
> To: Zhang, Helin; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for
> QinQ support
>
> Hi Helin,
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Helin Zhang
> > Sent: Tuesday, May 05, 2015 3:32 AM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for
> > QinQ support
> >
> > To support QinQ, 'vlan_tci' should be replaced by 'vlan_tci0' and
> > 'vlan_tci1'. Also new offload flags of 'PKT_RX_QINQ_PKT' and
> > 'PKT_TX_QINQ_PKT' should be added.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > ---
> > app/test-pmd/flowgen.c | 2 +-
> > app/test-pmd/macfwd.c | 2 +-
> > app/test-pmd/macswap.c | 2 +-
> > app/test-pmd/rxonly.c | 2 +-
> > app/test-pmd/txonly.c | 2 +-
> > app/test/packet_burst_generator.c | 4 ++--
> > lib/librte_ether/rte_ether.h | 4 ++--
> > lib/librte_mbuf/rte_mbuf.h | 22 +++++++++++++++++++---
> > lib/librte_pmd_e1000/em_rxtx.c | 8 ++++----
> > lib/librte_pmd_e1000/igb_rxtx.c | 8 ++++----
> > lib/librte_pmd_enic/enic_ethdev.c | 2 +-
> > lib/librte_pmd_enic/enic_main.c | 2 +-
> > lib/librte_pmd_fm10k/fm10k_rxtx.c | 2 +-
> > lib/librte_pmd_i40e/i40e_rxtx.c | 8 ++++----
> > lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 11 +++++------
> > lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 6 +++---
> > 16 files changed, 51 insertions(+), 36 deletions(-)
> >
> > diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c index
> > 72016c9..f24b00c 100644
> > --- a/app/test-pmd/flowgen.c
> > +++ b/app/test-pmd/flowgen.c
> > @@ -207,7 +207,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
> > pkt->nb_segs = 1;
> > pkt->pkt_len = pkt_size;
> > pkt->ol_flags = ol_flags;
> > - pkt->vlan_tci = vlan_tci;
> > + pkt->vlan_tci0 = vlan_tci;
> > pkt->l2_len = sizeof(struct ether_hdr);
> > pkt->l3_len = sizeof(struct ipv4_hdr);
> > pkts_burst[nb_pkt] = pkt;
> > diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c index
> > 035e5eb..590b613 100644
> > --- a/app/test-pmd/macfwd.c
> > +++ b/app/test-pmd/macfwd.c
> > @@ -120,7 +120,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
> > mb->ol_flags = ol_flags;
> > mb->l2_len = sizeof(struct ether_hdr);
> > mb->l3_len = sizeof(struct ipv4_hdr);
> > - mb->vlan_tci = txp->tx_vlan_id;
> > + mb->vlan_tci0 = txp->tx_vlan_id;
> > }
> > nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
> > fs->tx_packets += nb_tx;
> > diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c index
> > 6729849..c355399 100644
> > --- a/app/test-pmd/macswap.c
> > +++ b/app/test-pmd/macswap.c
> > @@ -122,7 +122,7 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
> > mb->ol_flags = ol_flags;
> > mb->l2_len = sizeof(struct ether_hdr);
> > mb->l3_len = sizeof(struct ipv4_hdr);
> > - mb->vlan_tci = txp->tx_vlan_id;
> > + mb->vlan_tci0 = txp->tx_vlan_id;
> > }
> > nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
> > fs->tx_packets += nb_tx;
> > diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c index
> > ac56090..aa2cf7f 100644
> > --- a/app/test-pmd/rxonly.c
> > +++ b/app/test-pmd/rxonly.c
> > @@ -159,7 +159,7 @@ pkt_burst_receive(struct fwd_stream *fs)
> > mb->hash.fdir.hash, mb->hash.fdir.id);
> > }
> > if (ol_flags & PKT_RX_VLAN_PKT)
> > - printf(" - VLAN tci=0x%x", mb->vlan_tci);
> > + printf(" - VLAN tci=0x%x", mb->vlan_tci0);
> > if (is_encapsulation) {
> > struct ipv4_hdr *ipv4_hdr;
> > struct ipv6_hdr *ipv6_hdr;
> > diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c index
> > ca32c85..4a2827f 100644
> > --- a/app/test-pmd/txonly.c
> > +++ b/app/test-pmd/txonly.c
> > @@ -266,7 +266,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
> > pkt->nb_segs = tx_pkt_nb_segs;
> > pkt->pkt_len = tx_pkt_length;
> > pkt->ol_flags = ol_flags;
> > - pkt->vlan_tci = vlan_tci;
> > + pkt->vlan_tci0 = vlan_tci;
> > pkt->l2_len = sizeof(struct ether_hdr);
> > pkt->l3_len = sizeof(struct ipv4_hdr);
> > pkts_burst[nb_pkt] = pkt;
> > diff --git a/app/test/packet_burst_generator.c
> > b/app/test/packet_burst_generator.c
> > index b46eed7..959644c 100644
> > --- a/app/test/packet_burst_generator.c
> > +++ b/app/test/packet_burst_generator.c
> > @@ -270,7 +270,7 @@ nomore_mbuf:
> > pkt->l2_len = eth_hdr_size;
> >
> > if (ipv4) {
> > - pkt->vlan_tci = ETHER_TYPE_IPv4;
> > + pkt->vlan_tci0 = ETHER_TYPE_IPv4;
> > pkt->l3_len = sizeof(struct ipv4_hdr);
> >
> > if (vlan_enabled)
> > @@ -278,7 +278,7 @@ nomore_mbuf:
> > else
> > pkt->ol_flags = PKT_RX_IPV4_HDR;
> > } else {
> > - pkt->vlan_tci = ETHER_TYPE_IPv6;
> > + pkt->vlan_tci0 = ETHER_TYPE_IPv6;
> > pkt->l3_len = sizeof(struct ipv6_hdr);
> >
> > if (vlan_enabled)
> > diff --git a/lib/librte_ether/rte_ether.h
> > b/lib/librte_ether/rte_ether.h index 49f4576..6d682a2 100644
> > --- a/lib/librte_ether/rte_ether.h
> > +++ b/lib/librte_ether/rte_ether.h
> > @@ -357,7 +357,7 @@ static inline int rte_vlan_strip(struct rte_mbuf *m)
> >
> > struct vlan_hdr *vh = (struct vlan_hdr *)(eh + 1);
> > m->ol_flags |= PKT_RX_VLAN_PKT;
> > - m->vlan_tci = rte_be_to_cpu_16(vh->vlan_tci);
> > + m->vlan_tci0 = rte_be_to_cpu_16(vh->vlan_tci);
> >
> > /* Copy ether header over rather than moving whole packet */
> > memmove(rte_pktmbuf_adj(m, sizeof(struct vlan_hdr)),
> > @@ -404,7 +404,7 @@ static inline int rte_vlan_insert(struct rte_mbuf **m)
> > nh->ether_type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
> >
> > vh = (struct vlan_hdr *) (nh + 1);
> > - vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci);
> > + vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci0);
> >
> > return 0;
> > }
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index 70b0987..6eed54f 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -101,11 +101,17 @@ extern "C" {
> > #define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
> > #define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
> > #define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
> > +#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double VLAN stripped. */
> > /* add new RX flags here */
> >
> > /* add new TX flags here */
> >
> > /**
> > + * Second VLAN insertion (QinQ) flag.
> > + */
> > +#define PKT_TX_QINQ_PKT (1ULL << 49)
> > +
> > +/**
> > * TCP segmentation offload. To enable this offload feature for a
> > * packet to be transmitted on hardware supporting TSO:
> > * - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
> > @@ -268,7 +274,6 @@ struct rte_mbuf {
> >
> > uint16_t data_len; /**< Amount of data in segment buffer. */
> > uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
> > - uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
> > uint16_t reserved;
>
> Now here is an implicit 2-byte hole between 'reserved' and 'rss'.
> Probably better to make it explicit - make 'reserved' uint32_t.
Yes, the layout will be changed according to the demands of Vector PMD.
The vlan structure will be kept the same, but the mbuf structure layout will
be re-organized a bit.
>
> Another thing - it looks like your change will break ixgbe vector RX.
Yes, in the cover-letter, I noted that the vector PMD will be updated soon
together with the code changes.
>
> > union {
> > uint32_t rss; /**< RSS hash result if RSS enabled */
> > @@ -289,6 +294,15 @@ struct rte_mbuf {
> > uint32_t usr; /**< User defined tags. See rte_distributor_process() */
> > } hash; /**< hash information */
> >
> > + /* VLAN tags */
> > + union {
> > + uint32_t vlan_tags;
> > + struct {
> > + uint16_t vlan_tci0;
> > + uint16_t vlan_tci1;
>
> Do you really need to change vlan_tci to vlan_tci0?
> Can't you keep 'vlan_tci' for first vlan tag, and add something like
> 'vlan_tci_ext', or 'vlan_tci_next' for second one?
> Would save you a lot of changes, again users who use single vlan wouldn't
> need to update their code for 2.1.
Yes, good point! The names came from the original mbuf definition done by
Bruce long long ago. If more people suggest keeping the old name and just adding a
new one, I will do that in the next version of the patch set.
Thank you all!
>
> > + };
> > + };
> > +
> > uint32_t seqn; /**< Sequence number. See also rte_reorder_insert() */
> >
> > /* second cache line - fields only used in slow path or on TX */
> > @@ -766,7 +780,8 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
> > m->next = NULL;
> > m->pkt_len = 0;
> > m->tx_offload = 0;
> > - m->vlan_tci = 0;
> > + m->vlan_tci0 = 0;
> > + m->vlan_tci1 = 0;
>
> Why just not:
> m-> vlan_tags = 0;
> ?
Accepted. Good point!
>
> > m->nb_segs = 1;
> > m->port = 0xff;
> >
> > @@ -838,7 +853,8 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
> > mi->data_off = m->data_off;
> > mi->data_len = m->data_len;
> > mi->port = m->port;
> > - mi->vlan_tci = m->vlan_tci;
> > + mi->vlan_tci0 = m->vlan_tci0;
> > + mi->vlan_tci1 = m->vlan_tci1;
>
> Same thing, why not:
> mi-> vlan_tags = m-> vlan_tags;
> ?
Accepted. Good point!
Regards,
Helin
>
> > mi->tx_offload = m->tx_offload;
> > mi->hash = m->hash;
> >
> > diff --git a/lib/librte_pmd_e1000/em_rxtx.c
> > b/lib/librte_pmd_e1000/em_rxtx.c index 64d067c..422f4ed 100644
> > --- a/lib/librte_pmd_e1000/em_rxtx.c
> > +++ b/lib/librte_pmd_e1000/em_rxtx.c
> > @@ -438,7 +438,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> > /* If hardware offload required */
> > tx_ol_req = (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK));
> > if (tx_ol_req) {
> > - hdrlen.f.vlan_tci = tx_pkt->vlan_tci;
> > + hdrlen.f.vlan_tci = tx_pkt->vlan_tci0;
> > hdrlen.f.l2_len = tx_pkt->l2_len;
> > hdrlen.f.l3_len = tx_pkt->l3_len;
> > /* If new context to be built or reuse the exist ctx. */
> > @@ -534,7 +534,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> > /* Set VLAN Tag offload fields. */
> > if (ol_flags & PKT_TX_VLAN_PKT) {
> > cmd_type_len |= E1000_TXD_CMD_VLE;
> > - popts_spec = tx_pkt->vlan_tci << E1000_TXD_VLAN_SHIFT;
> > + popts_spec = tx_pkt->vlan_tci0 << E1000_TXD_VLAN_SHIFT;
> > }
> >
> > if (tx_ol_req) {
> > @@ -800,7 +800,7 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> > rx_desc_error_to_pkt_flags(rxd.errors);
> >
> > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
> > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.special);
> >
> > /*
> > * Store the mbuf address into the next entry of the array
> > @@ -1026,7 +1026,7 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> > rx_desc_error_to_pkt_flags(rxd.errors);
> >
> > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
> > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.special);
> >
> > /* Prefetch data of first segment, if configured to do so. */
> > rte_packet_prefetch((char *)first_seg->buf_addr +
> > diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
> > index 80d05c0..fda273f 100644
> > --- a/lib/librte_pmd_e1000/igb_rxtx.c
> > +++ b/lib/librte_pmd_e1000/igb_rxtx.c
> > @@ -401,7 +401,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> > ol_flags = tx_pkt->ol_flags;
> > l2_l3_len.l2_len = tx_pkt->l2_len;
> > l2_l3_len.l3_len = tx_pkt->l3_len;
> > - vlan_macip_lens.f.vlan_tci = tx_pkt->vlan_tci;
> > + vlan_macip_lens.f.vlan_tci = tx_pkt->vlan_tci0;
> > vlan_macip_lens.f.l2_l3_len = l2_l3_len.u16;
> > tx_ol_req = ol_flags & IGB_TX_OFFLOAD_MASK;
> >
> > @@ -784,7 +784,7 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> > rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
> > hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> >
> > pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
> > @@ -1015,10 +1015,10 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> > first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
> >
> > /*
> > - * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
> > + * The vlan_tci0 field is only valid when PKT_RX_VLAN_PKT is
> > * set in the pkt_flags field.
> > */
> > - first_seg->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > + first_seg->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> > pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
> > diff --git a/lib/librte_pmd_enic/enic_ethdev.c
> > b/lib/librte_pmd_enic/enic_ethdev.c
> > index 69ad01b..45c0e14 100644
> > --- a/lib/librte_pmd_enic/enic_ethdev.c
> > +++ b/lib/librte_pmd_enic/enic_ethdev.c
> > @@ -506,7 +506,7 @@ static uint16_t enicpmd_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> > return index;
> > }
> > pkt_len = tx_pkt->pkt_len;
> > - vlan_id = tx_pkt->vlan_tci;
> > + vlan_id = tx_pkt->vlan_tci0;
> > ol_flags = tx_pkt->ol_flags;
> > for (frags = 0; inc_len < pkt_len; frags++) {
> > if (!tx_pkt)
> > diff --git a/lib/librte_pmd_enic/enic_main.c
> > b/lib/librte_pmd_enic/enic_main.c index 15313c2..d1660a1 100644
> > --- a/lib/librte_pmd_enic/enic_main.c
> > +++ b/lib/librte_pmd_enic/enic_main.c
> > @@ -490,7 +490,7 @@ static int enic_rq_indicate_buf(struct vnic_rq
> > *rq,
> >
> > if (vlan_tci) {
> > rx_pkt->ol_flags |= PKT_RX_VLAN_PKT;
> > - rx_pkt->vlan_tci = vlan_tci;
> > + rx_pkt->vlan_tci0 = vlan_tci;
> > }
> >
> > return eop;
> > diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > b/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > index 83bddfc..ba3b8aa 100644
> > --- a/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > @@ -410,7 +410,7 @@ static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb)
> >
> > /* set vlan if requested */
> > if (mb->ol_flags & PKT_TX_VLAN_PKT)
> > - q->hw_ring[q->next_free].vlan = mb->vlan_tci;
> > + q->hw_ring[q->next_free].vlan = mb->vlan_tci0;
> >
> > /* fill up the rings */
> > for (; mb != NULL; mb = mb->next) {
> > diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c
> > b/lib/librte_pmd_i40e/i40e_rxtx.c index 493cfa3..1fe377c 100644
> > --- a/lib/librte_pmd_i40e/i40e_rxtx.c
> > +++ b/lib/librte_pmd_i40e/i40e_rxtx.c
> > @@ -703,7 +703,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
> > I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
> > mb->data_len = pkt_len;
> > mb->pkt_len = pkt_len;
> > - mb->vlan_tci = rx_status &
> > + mb->vlan_tci0 = rx_status &
> > (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
> > rte_le_to_cpu_16(\
> > rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
> > @@ -947,7 +947,7 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> > rxm->data_len = rx_packet_len;
> > rxm->port = rxq->port_id;
> >
> > - rxm->vlan_tci = rx_status &
> > + rxm->vlan_tci0 = rx_status &
> > (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
> > rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
> > pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
> > @@ -1106,7 +1106,7 @@ i40e_recv_scattered_pkts(void *rx_queue,
> > }
> >
> > first_seg->port = rxq->port_id;
> > - first_seg->vlan_tci = (rx_status &
> > + first_seg->vlan_tci0 = (rx_status &
> > (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
> > rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
> > pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
> > @@ -1291,7 +1291,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> >
> > /* Descriptor based VLAN insertion */
> > if (ol_flags & PKT_TX_VLAN_PKT) {
> > - tx_flags |= tx_pkt->vlan_tci <<
> > + tx_flags |= tx_pkt->vlan_tci0 <<
> > I40E_TX_FLAG_L2TAG1_SHIFT;
> > tx_flags |= I40E_TX_FLAG_INSERT_VLAN;
> > td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
> > diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > index 7f15f15..fd664da 100644
> > --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > @@ -612,7 +612,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> > tx_offload.l2_len = tx_pkt->l2_len;
> > tx_offload.l3_len = tx_pkt->l3_len;
> > tx_offload.l4_len = tx_pkt->l4_len;
> > - tx_offload.vlan_tci = tx_pkt->vlan_tci;
> > + tx_offload.vlan_tci = tx_pkt->vlan_tci0;
> > tx_offload.tso_segsz = tx_pkt->tso_segsz;
> >
> > /* If new context need be built or reuse the exist ctx. */
> > @@ -981,8 +981,7 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
> > pkt_len = (uint16_t)(rxdp[j].wb.upper.length - rxq->crc_len);
> > mb->data_len = pkt_len;
> > mb->pkt_len = pkt_len;
> > - mb->vlan_tci = rxdp[j].wb.upper.vlan;
> > - mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
> > + mb->vlan_tci0 = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
> >
> > /* convert descriptor fields to rte mbuf flags */
> > pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
> > @@ -1327,7 +1326,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> >
> > hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> >
> > pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
> > @@ -1412,10 +1411,10 @@ ixgbe_fill_cluster_head_buf(
> > head->port = port_id;
> >
> > /*
> > - * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
> > + * The vlan_tci0 field is only valid when PKT_RX_VLAN_PKT is
> > * set in the pkt_flags field.
> > */
> > - head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
> > + head->vlan_tci0 = rte_le_to_cpu_16(desc->wb.upper.vlan);
> > hlen_type_rss = rte_le_to_cpu_32(desc->wb.lower.lo_dword.data);
> > pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > pkt_flags |= rx_desc_status_to_pkt_flags(staterr);
> > diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > index d8019f5..57a33c9 100644
> > --- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > +++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > @@ -405,7 +405,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> > /* Add VLAN tag if requested */
> > if (txm->ol_flags & PKT_TX_VLAN_PKT) {
> > txd->ti = 1;
> > - txd->tci = rte_cpu_to_le_16(txm->vlan_tci);
> > + txd->tci = rte_cpu_to_le_16(txm->vlan_tci0);
> > }
> >
> > /* Record current mbuf for freeing it later in tx complete */
> > @@ -629,10 +629,10 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> > rcd->tci);
> > rxm->ol_flags = PKT_RX_VLAN_PKT;
> > /* Copy vlan tag in packet buffer */
> > - rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
> > + rxm->vlan_tci0 = rte_le_to_cpu_16((uint16_t)rcd->tci);
> > } else {
> > rxm->ol_flags = 0;
> > - rxm->vlan_tci = 0;
> > + rxm->vlan_tci0 = 0;
> > }
> >
> > /* Initialize newly received packet buffer */
> > --
> > 1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* Re: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for QinQ support
2015-05-05 22:37 ` Ananyev, Konstantin
@ 2015-05-06 4:07 ` Zhang, Helin
0 siblings, 0 replies; 55+ messages in thread
From: Zhang, Helin @ 2015-05-06 4:07 UTC (permalink / raw)
To: Ananyev, Konstantin, Chilikin, Andrey, dev
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Wednesday, May 6, 2015 6:38 AM
> To: Chilikin, Andrey; Zhang, Helin; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for
> QinQ support
>
>
>
> > -----Original Message-----
> > From: Chilikin, Andrey
> > Sent: Tuesday, May 05, 2015 4:43 PM
> > To: Ananyev, Konstantin; Zhang, Helin; dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure
> > for QinQ support
> >
> > Hi Helin,
> >
> > I would agree with Konstantin about the new naming for VLAN tags. I think
> > we can leave the existing name vlan_tci and just name the new VLAN tag
> > differently. I was thinking along the lines of "vlan_tci_outer" or "stag_tci". So
> vlan_tci will store the single VLAN when only one L2 tag is present, or the
> inner VLAN when two tags are present. "vlan_tci_outer" will store the outer
> VLAN when two L2 tags are present.
> > "stag_tci" also looks like a good candidate, as in most cases when
> > two tags are present the outer VLAN is addressed as the S-Tag, even if it is
> simple tag stacking.
>
> Yep, I suppose "vlan_tci_outer" or "stag_tci" is a better name than what I
> suggested.
> Konstantin
Agree with you guys. It seems this is more popular! Thank you, Andrey, Konstantin!
Regards,
Helin
>
> >
> > Regards,
> > Andrey
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev,
> > > Konstantin
> > > Sent: Tuesday, May 5, 2015 12:05 PM
> > > To: Zhang, Helin; dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure
> > > for QinQ support
> > >
> > > Hi Helin,
> > >
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Helin Zhang
> > > > Sent: Tuesday, May 05, 2015 3:32 AM
> > > > To: dev@dpdk.org
> > > > Subject: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure
> > > > for QinQ support
> > > >
> > > > To support QinQ, 'vlan_tci' should be replaced by 'vlan_tci0' and
> > > > 'vlan_tci1'. Also new offload flags of 'PKT_RX_QINQ_PKT' and
> > > > 'PKT_TX_QINQ_PKT' should be added.
> > > >
> > > > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > > > ---
> > > > app/test-pmd/flowgen.c | 2 +-
> > > > app/test-pmd/macfwd.c | 2 +-
> > > > app/test-pmd/macswap.c | 2 +-
> > > > app/test-pmd/rxonly.c | 2 +-
> > > > app/test-pmd/txonly.c | 2 +-
> > > > app/test/packet_burst_generator.c | 4 ++--
> > > > lib/librte_ether/rte_ether.h | 4 ++--
> > > > lib/librte_mbuf/rte_mbuf.h | 22
> +++++++++++++++++++---
> > > > lib/librte_pmd_e1000/em_rxtx.c | 8 ++++----
> > > > lib/librte_pmd_e1000/igb_rxtx.c | 8 ++++----
> > > > lib/librte_pmd_enic/enic_ethdev.c | 2 +-
> > > > lib/librte_pmd_enic/enic_main.c | 2 +-
> > > > lib/librte_pmd_fm10k/fm10k_rxtx.c | 2 +-
> > > > lib/librte_pmd_i40e/i40e_rxtx.c | 8 ++++----
> > > > lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 11 +++++------
> > > > lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 6 +++---
> > > > 16 files changed, 51 insertions(+), 36 deletions(-)
> > > >
> > > > diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c index
> > > > 72016c9..f24b00c 100644
> > > > --- a/app/test-pmd/flowgen.c
> > > > +++ b/app/test-pmd/flowgen.c
> > > > @@ -207,7 +207,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
> > > > pkt->nb_segs = 1;
> > > > pkt->pkt_len = pkt_size;
> > > > pkt->ol_flags = ol_flags;
> > > > - pkt->vlan_tci = vlan_tci;
> > > > + pkt->vlan_tci0 = vlan_tci;
> > > > pkt->l2_len = sizeof(struct ether_hdr);
> > > > pkt->l3_len = sizeof(struct ipv4_hdr);
> > > > pkts_burst[nb_pkt] = pkt;
> > > > diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c index
> > > > 035e5eb..590b613 100644
> > > > --- a/app/test-pmd/macfwd.c
> > > > +++ b/app/test-pmd/macfwd.c
> > > > @@ -120,7 +120,7 @@ pkt_burst_mac_forward(struct fwd_stream
> *fs)
> > > > mb->ol_flags = ol_flags;
> > > > mb->l2_len = sizeof(struct ether_hdr);
> > > > mb->l3_len = sizeof(struct ipv4_hdr);
> > > > - mb->vlan_tci = txp->tx_vlan_id;
> > > > + mb->vlan_tci0 = txp->tx_vlan_id;
> > > > }
> > > > nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst,
> > > nb_rx);
> > > > fs->tx_packets += nb_tx;
> > > > diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c
> index
> > > > 6729849..c355399 100644
> > > > --- a/app/test-pmd/macswap.c
> > > > +++ b/app/test-pmd/macswap.c
> > > > @@ -122,7 +122,7 @@ pkt_burst_mac_swap(struct fwd_stream
> *fs)
> > > > mb->ol_flags = ol_flags;
> > > > mb->l2_len = sizeof(struct ether_hdr);
> > > > mb->l3_len = sizeof(struct ipv4_hdr);
> > > > - mb->vlan_tci = txp->tx_vlan_id;
> > > > + mb->vlan_tci0 = txp->tx_vlan_id;
> > > > }
> > > > nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst,
> > > nb_rx);
> > > > fs->tx_packets += nb_tx;
> > > > diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c index
> > > > ac56090..aa2cf7f 100644
> > > > --- a/app/test-pmd/rxonly.c
> > > > +++ b/app/test-pmd/rxonly.c
> > > > @@ -159,7 +159,7 @@ pkt_burst_receive(struct fwd_stream *fs)
> > > > mb->hash.fdir.hash, mb->hash.fdir.id);
> > > > }
> > > > if (ol_flags & PKT_RX_VLAN_PKT)
> > > > - printf(" - VLAN tci=0x%x", mb->vlan_tci);
> > > > + printf(" - VLAN tci=0x%x", mb->vlan_tci0);
> > > > if (is_encapsulation) {
> > > > struct ipv4_hdr *ipv4_hdr;
> > > > struct ipv6_hdr *ipv6_hdr;
> > > > diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c index
> > > > ca32c85..4a2827f 100644
> > > > --- a/app/test-pmd/txonly.c
> > > > +++ b/app/test-pmd/txonly.c
> > > > @@ -266,7 +266,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
> > > > pkt->nb_segs = tx_pkt_nb_segs;
> > > > pkt->pkt_len = tx_pkt_length;
> > > > pkt->ol_flags = ol_flags;
> > > > - pkt->vlan_tci = vlan_tci;
> > > > + pkt->vlan_tci0 = vlan_tci;
> > > > pkt->l2_len = sizeof(struct ether_hdr);
> > > > pkt->l3_len = sizeof(struct ipv4_hdr);
> > > > pkts_burst[nb_pkt] = pkt;
> > > > diff --git a/app/test/packet_burst_generator.c
> > > > b/app/test/packet_burst_generator.c
> > > > index b46eed7..959644c 100644
> > > > --- a/app/test/packet_burst_generator.c
> > > > +++ b/app/test/packet_burst_generator.c
> > > > @@ -270,7 +270,7 @@ nomore_mbuf:
> > > > pkt->l2_len = eth_hdr_size;
> > > >
> > > > if (ipv4) {
> > > > - pkt->vlan_tci = ETHER_TYPE_IPv4;
> > > > + pkt->vlan_tci0 = ETHER_TYPE_IPv4;
> > > > pkt->l3_len = sizeof(struct ipv4_hdr);
> > > >
> > > > if (vlan_enabled)
> > > > @@ -278,7 +278,7 @@ nomore_mbuf:
> > > > else
> > > > pkt->ol_flags = PKT_RX_IPV4_HDR;
> > > > } else {
> > > > - pkt->vlan_tci = ETHER_TYPE_IPv6;
> > > > + pkt->vlan_tci0 = ETHER_TYPE_IPv6;
> > > > pkt->l3_len = sizeof(struct ipv6_hdr);
> > > >
> > > > if (vlan_enabled)
> > > > diff --git a/lib/librte_ether/rte_ether.h
> > > > b/lib/librte_ether/rte_ether.h index 49f4576..6d682a2 100644
> > > > --- a/lib/librte_ether/rte_ether.h
> > > > +++ b/lib/librte_ether/rte_ether.h
> > > > @@ -357,7 +357,7 @@ static inline int rte_vlan_strip(struct
> > > > rte_mbuf
> > > > *m)
> > > >
> > > > struct vlan_hdr *vh = (struct vlan_hdr *)(eh + 1);
> > > > m->ol_flags |= PKT_RX_VLAN_PKT;
> > > > - m->vlan_tci = rte_be_to_cpu_16(vh->vlan_tci);
> > > > + m->vlan_tci0 = rte_be_to_cpu_16(vh->vlan_tci);
> > > >
> > > > /* Copy ether header over rather than moving whole packet */
> > > > memmove(rte_pktmbuf_adj(m, sizeof(struct vlan_hdr)), @@
> -404,7
> > > > +404,7 @@ static inline int rte_vlan_insert(struct rte_mbuf **m)
> > > > nh->ether_type = rte_cpu_to_be_16(ETHER_TYPE_VLAN);
> > > >
> > > > vh = (struct vlan_hdr *) (nh + 1);
> > > > - vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci);
> > > > + vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci0);
> > > >
> > > > return 0;
> > > > }
> > > > diff --git a/lib/librte_mbuf/rte_mbuf.h
> > > > b/lib/librte_mbuf/rte_mbuf.h index 70b0987..6eed54f 100644
> > > > --- a/lib/librte_mbuf/rte_mbuf.h
> > > > +++ b/lib/librte_mbuf/rte_mbuf.h
> > > > @@ -101,11 +101,17 @@ extern "C" { #define
> PKT_RX_TUNNEL_IPV6_HDR
> > > > (1ULL << 12) /**< RX tunnel packet
> > > with IPv6 header. */
> > > > #define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported
> if FDIR
> > > match. */
> > > > #define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes
> reported if
> > > FDIR match. */
> > > > +#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet
> with double
> > > VLAN stripped. */
> > > > /* add new RX flags here */
> > > >
> > > > /* add new TX flags here */
> > > >
> > > > /**
> > > > + * Second VLAN insertion (QinQ) flag.
> > > > + */
> > > > +#define PKT_TX_QINQ_PKT (1ULL << 49)
> > > > +
> > > > +/**
> > > > * TCP segmentation offload. To enable this offload feature for a
> > > > * packet to be transmitted on hardware supporting TSO:
> > > > * - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag
> > > > implies @@ -268,7 +274,6 @@ struct rte_mbuf {
> > > >
> > > > uint16_t data_len; /**< Amount of data in segment
> buffer. */
> > > > uint32_t pkt_len; /**< Total pkt len: sum of all
> segments. */
> > > > - uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU
> order) */
> > > > uint16_t reserved;
> > >
> > > Now here is an implicit 2-byte hole between 'reserved' and 'rss'.
> > > Probably better to make it explicit - make 'reserved' uint32_t.
> > >
> > > Another thing - it looks like your change will break ixgbe vector RX.
> > >
> > > > union {
> > > > uint32_t rss; /**< RSS hash result if RSS enabled */
> > > > @@ -289,6 +294,15 @@ struct rte_mbuf {
> > > > uint32_t usr; /**< User defined tags. See
> > > rte_distributor_process() */
> > > > } hash; /**< hash information */
> > > >
> > > > + /* VLAN tags */
> > > > + union {
> > > > + uint32_t vlan_tags;
> > > > + struct {
> > > > + uint16_t vlan_tci0;
> > > > + uint16_t vlan_tci1;
> > >
> > > Do you really need to change vlan_tci to vlan_tci0?
> > > Can't you keep 'vlan_tci' for first vlan tag, and add something like
> > > 'vlan_tci_ext', or 'vlan_tci_next' for second one?
> > > Would save you a lot of changes, again users who use single vlan
> > > wouldn't need to update their code for 2.1.
> > >
> > > > + };
> > > > + };
> > > > +
> > > > uint32_t seqn; /**< Sequence number. See also
> > > rte_reorder_insert()
> > > > */
> > > >
> > > > /* second cache line - fields only used in slow path or on TX */
> > > > @@
> > > > -766,7 +780,8 @@ static inline void rte_pktmbuf_reset(struct
> > > > rte_mbuf
> > > *m)
> > > > m->next = NULL;
> > > > m->pkt_len = 0;
> > > > m->tx_offload = 0;
> > > > - m->vlan_tci = 0;
> > > > + m->vlan_tci0 = 0;
> > > > + m->vlan_tci1 = 0;
> > >
> > > Why not just:
> > > m-> vlan_tags = 0;
> > > ?
> > >
> > > > m->nb_segs = 1;
> > > > m->port = 0xff;
> > > >
> > > > @@ -838,7 +853,8 @@ static inline void rte_pktmbuf_attach(struct
> > > rte_mbuf *mi, struct rte_mbuf *m)
> > > > mi->data_off = m->data_off;
> > > > mi->data_len = m->data_len;
> > > > mi->port = m->port;
> > > > - mi->vlan_tci = m->vlan_tci;
> > > > + mi->vlan_tci0 = m->vlan_tci0;
> > > > + mi->vlan_tci1 = m->vlan_tci1;
> > >
> > > Same thing, why not:
> > > mi-> vlan_tags = m-> vlan_tags;
> > > ?
> > >
> > > > mi->tx_offload = m->tx_offload;
> > > > mi->hash = m->hash;
> > > >
> > > > diff --git a/lib/librte_pmd_e1000/em_rxtx.c
> > > > b/lib/librte_pmd_e1000/em_rxtx.c index 64d067c..422f4ed 100644
> > > > --- a/lib/librte_pmd_e1000/em_rxtx.c
> > > > +++ b/lib/librte_pmd_e1000/em_rxtx.c
> > > > @@ -438,7 +438,7 @@ eth_em_xmit_pkts(void *tx_queue, struct
> > > rte_mbuf **tx_pkts,
> > > > /* If hardware offload required */
> > > > tx_ol_req = (ol_flags & (PKT_TX_IP_CKSUM |
> > > PKT_TX_L4_MASK));
> > > > if (tx_ol_req) {
> > > > - hdrlen.f.vlan_tci = tx_pkt->vlan_tci;
> > > > + hdrlen.f.vlan_tci = tx_pkt->vlan_tci0;
> > > > hdrlen.f.l2_len = tx_pkt->l2_len;
> > > > hdrlen.f.l3_len = tx_pkt->l3_len;
> > > > /* If new context to be built or reuse the exist ctx. */
> > > @@ -534,7
> > > > +534,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf
> > > **tx_pkts,
> > > > /* Set VLAN Tag offload fields. */
> > > > if (ol_flags & PKT_TX_VLAN_PKT) {
> > > > cmd_type_len |= E1000_TXD_CMD_VLE;
> > > > - popts_spec = tx_pkt->vlan_tci <<
> > > E1000_TXD_VLAN_SHIFT;
> > > > + popts_spec = tx_pkt->vlan_tci0 <<
> > > E1000_TXD_VLAN_SHIFT;
> > > > }
> > > >
> > > > if (tx_ol_req) {
> > > > @@ -800,7 +800,7 @@ eth_em_recv_pkts(void *rx_queue, struct
> > > rte_mbuf **rx_pkts,
> > > > rx_desc_error_to_pkt_flags(rxd.errors);
> > > >
> > > > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > > > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
> > > > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.special);
> > > >
> > > > /*
> > > > * Store the mbuf address into the next entry of the array
> > > @@
> > > > -1026,7 +1026,7 @@ eth_em_recv_scattered_pkts(void
> *rx_queue,
> > > > struct
> > > rte_mbuf **rx_pkts,
> > > >
> > > rx_desc_error_to_pkt_flags(rxd.errors);
> > > >
> > > > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > > > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
> > > > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.special);
> > > >
> > > > /* Prefetch data of first segment, if configured to do so. */
> > > > rte_packet_prefetch((char *)first_seg->buf_addr + diff --git
> > > > a/lib/librte_pmd_e1000/igb_rxtx.c
> > > > b/lib/librte_pmd_e1000/igb_rxtx.c index 80d05c0..fda273f 100644
> > > > --- a/lib/librte_pmd_e1000/igb_rxtx.c
> > > > +++ b/lib/librte_pmd_e1000/igb_rxtx.c
> > > > @@ -401,7 +401,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct
> > > rte_mbuf **tx_pkts,
> > > > ol_flags = tx_pkt->ol_flags;
> > > > l2_l3_len.l2_len = tx_pkt->l2_len;
> > > > l2_l3_len.l3_len = tx_pkt->l3_len;
> > > > - vlan_macip_lens.f.vlan_tci = tx_pkt->vlan_tci;
> > > > + vlan_macip_lens.f.vlan_tci = tx_pkt->vlan_tci0;
> > > > vlan_macip_lens.f.l2_l3_len = l2_l3_len.u16;
> > > > tx_ol_req = ol_flags & IGB_TX_OFFLOAD_MASK;
> > > >
> > > > @@ -784,7 +784,7 @@ eth_igb_recv_pkts(void *rx_queue, struct
> > > rte_mbuf **rx_pkts,
> > > > rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
> > > > hlen_type_rss =
> > > rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> > > > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > > > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > > > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > > >
> > > > pkt_flags =
> > > rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > > > pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
> > > > @@ -1015,10 +1015,10 @@ eth_igb_recv_scattered_pkts(void
> > > > *rx_queue,
> > > struct rte_mbuf **rx_pkts,
> > > > first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
> > > >
> > > > /*
> > > > - * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
> > > > + * The vlan_tci0 field is only valid when PKT_RX_VLAN_PKT
> is
> > > > * set in the pkt_flags field.
> > > > */
> > > > - first_seg->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > > > + first_seg->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > > > hlen_type_rss =
> > > rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> > > > pkt_flags =
> > > rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > > > pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
> > > > diff --git a/lib/librte_pmd_enic/enic_ethdev.c
> > > > b/lib/librte_pmd_enic/enic_ethdev.c
> > > > index 69ad01b..45c0e14 100644
> > > > --- a/lib/librte_pmd_enic/enic_ethdev.c
> > > > +++ b/lib/librte_pmd_enic/enic_ethdev.c
> > > > @@ -506,7 +506,7 @@ static uint16_t enicpmd_xmit_pkts(void
> > > > *tx_queue,
> > > struct rte_mbuf **tx_pkts,
> > > > return index;
> > > > }
> > > > pkt_len = tx_pkt->pkt_len;
> > > > - vlan_id = tx_pkt->vlan_tci;
> > > > + vlan_id = tx_pkt->vlan_tci0;
> > > > ol_flags = tx_pkt->ol_flags;
> > > > for (frags = 0; inc_len < pkt_len; frags++) {
> > > > if (!tx_pkt)
> > > > diff --git a/lib/librte_pmd_enic/enic_main.c
> > > > b/lib/librte_pmd_enic/enic_main.c index 15313c2..d1660a1
> 100644
> > > > --- a/lib/librte_pmd_enic/enic_main.c
> > > > +++ b/lib/librte_pmd_enic/enic_main.c
> > > > @@ -490,7 +490,7 @@ static int enic_rq_indicate_buf(struct
> vnic_rq
> > > > *rq,
> > > >
> > > > if (vlan_tci) {
> > > > rx_pkt->ol_flags |= PKT_RX_VLAN_PKT;
> > > > - rx_pkt->vlan_tci = vlan_tci;
> > > > + rx_pkt->vlan_tci0 = vlan_tci;
> > > > }
> > > >
> > > > return eop;
> > > > diff --git a/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > > > b/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > > > index 83bddfc..ba3b8aa 100644
> > > > --- a/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > > > +++ b/lib/librte_pmd_fm10k/fm10k_rxtx.c
> > > > @@ -410,7 +410,7 @@ static inline void tx_xmit_pkt(struct
> > > > fm10k_tx_queue *q, struct rte_mbuf *mb)
> > > >
> > > > /* set vlan if requested */
> > > > if (mb->ol_flags & PKT_TX_VLAN_PKT)
> > > > - q->hw_ring[q->next_free].vlan = mb->vlan_tci;
> > > > + q->hw_ring[q->next_free].vlan = mb->vlan_tci0;
> > > >
> > > > /* fill up the rings */
> > > > for (; mb != NULL; mb = mb->next) { diff --git
> > > > a/lib/librte_pmd_i40e/i40e_rxtx.c
> > > > b/lib/librte_pmd_i40e/i40e_rxtx.c index 493cfa3..1fe377c 100644
> > > > --- a/lib/librte_pmd_i40e/i40e_rxtx.c
> > > > +++ b/lib/librte_pmd_i40e/i40e_rxtx.c
> > > > @@ -703,7 +703,7 @@ i40e_rx_scan_hw_ring(struct
> i40e_rx_queue *rxq)
> > > > I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq-
> crc_len;
> > > > mb->data_len = pkt_len;
> > > > mb->pkt_len = pkt_len;
> > > > - mb->vlan_tci = rx_status &
> > > > + mb->vlan_tci0 = rx_status &
> > > > (1 <<
> > > I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
> > > > rte_le_to_cpu_16(\
> > > > rxdp[j].wb.qword0.lo_dword.l2tag1) : 0; @@
> > > -947,7 +947,7 @@
> > > > i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> uint16_t
> > > nb_pkts)
> > > > rxm->data_len = rx_packet_len;
> > > > rxm->port = rxq->port_id;
> > > >
> > > > - rxm->vlan_tci = rx_status &
> > > > + rxm->vlan_tci0 = rx_status &
> > > > (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
> > > > rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) :
> > > 0;
> > > > pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
> > > > @@ -1106,7 +1106,7 @@ i40e_recv_scattered_pkts(void
> *rx_queue,
> > > > }
> > > >
> > > > first_seg->port = rxq->port_id;
> > > > - first_seg->vlan_tci = (rx_status &
> > > > + first_seg->vlan_tci0 = (rx_status &
> > > > (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
> > > > rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) :
> > > 0;
> > > > pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
> > > > @@ -1291,7 +1291,7 @@ i40e_xmit_pkts(void *tx_queue, struct
> > > > rte_mbuf **tx_pkts, uint16_t nb_pkts)
> > > >
> > > > /* Descriptor based VLAN insertion */
> > > > if (ol_flags & PKT_TX_VLAN_PKT) {
> > > > - tx_flags |= tx_pkt->vlan_tci <<
> > > > + tx_flags |= tx_pkt->vlan_tci0 <<
> > > > I40E_TX_FLAG_L2TAG1_SHIFT;
> > > > tx_flags |= I40E_TX_FLAG_INSERT_VLAN;
> > > > td_cmd |= I40E_TX_DESC_CMD_IL2TAG1; diff --git
> > > > a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > > > b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > > > index 7f15f15..fd664da 100644
> > > > --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > > > +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> > > > @@ -612,7 +612,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct
> > > > rte_mbuf
> > > **tx_pkts,
> > > > tx_offload.l2_len = tx_pkt->l2_len;
> > > > tx_offload.l3_len = tx_pkt->l3_len;
> > > > tx_offload.l4_len = tx_pkt->l4_len;
> > > > - tx_offload.vlan_tci = tx_pkt->vlan_tci;
> > > > + tx_offload.vlan_tci = tx_pkt->vlan_tci0;
> > > > tx_offload.tso_segsz = tx_pkt->tso_segsz;
> > > >
> > > > /* If new context need be built or reuse the exist ctx.
> > > */ @@
> > > > -981,8 +981,7 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue
> *rxq)
> > > > pkt_len = (uint16_t)(rxdp[j].wb.upper.length - rxq-
> crc_len);
> > > > mb->data_len = pkt_len;
> > > > mb->pkt_len = pkt_len;
> > > > - mb->vlan_tci = rxdp[j].wb.upper.vlan;
> > > > - mb->vlan_tci =
> > > rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
> > > > + mb->vlan_tci0 =
> > > rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
> > > >
> > > > /* convert descriptor fields to rte mbuf flags */
> > > > pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
> > > > @@ -1327,7 +1326,7 @@ ixgbe_recv_pkts(void *rx_queue, struct
> > > rte_mbuf
> > > > **rx_pkts,
> > > >
> > > > hlen_type_rss =
> > > rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> > > > /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> > > > - rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > > > + rxm->vlan_tci0 = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> > > >
> > > > pkt_flags =
> > > rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > > > pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
> > > > @@ -1412,10 +1411,10 @@ ixgbe_fill_cluster_head_buf(
> > > > head->port = port_id;
> > > >
> > > > /*
> > > > - * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
> > > > + * The vlan_tci0 field is only valid when PKT_RX_VLAN_PKT is
> > > > * set in the pkt_flags field.
> > > > */
> > > > - head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
> > > > + head->vlan_tci0 = rte_le_to_cpu_16(desc->wb.upper.vlan);
> > > > hlen_type_rss =
> rte_le_to_cpu_32(desc->wb.lower.lo_dword.data);
> > > > pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
> > > > pkt_flags |= rx_desc_status_to_pkt_flags(staterr);
> > > > diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > > > b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > > > index d8019f5..57a33c9 100644
> > > > --- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > > > +++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
> > > > @@ -405,7 +405,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct
> > > rte_mbuf **tx_pkts,
> > > > /* Add VLAN tag if requested */
> > > > if (txm->ol_flags & PKT_TX_VLAN_PKT) {
> > > > txd->ti = 1;
> > > > - txd->tci = rte_cpu_to_le_16(txm->vlan_tci);
> > > > + txd->tci = rte_cpu_to_le_16(txm->vlan_tci0);
> > > > }
> > > >
> > > > /* Record current mbuf for freeing it later in tx
> > > complete */ @@
> > > > -629,10 +629,10 @@ vmxnet3_recv_pkts(void *rx_queue, struct
> > > > rte_mbuf
> > > **rx_pkts, uint16_t nb_pkts)
> > > > rcd->tci);
> > > > rxm->ol_flags = PKT_RX_VLAN_PKT;
> > > > /* Copy vlan tag in packet buffer */
> > > > - rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd-
> > > >tci);
> > > > + rxm->vlan_tci0 = rte_le_to_cpu_16((uint16_t)rcd-
> > > >tci);
> > > > } else {
> > > > rxm->ol_flags = 0;
> > > > - rxm->vlan_tci = 0;
> > > > + rxm->vlan_tci0 = 0;
> > > > }
> > > >
> > > > /* Initialize newly received packet buffer */
> > > > --
> > > > 1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* Re: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for QinQ support
2015-05-06 4:06 ` Zhang, Helin
@ 2015-05-06 8:39 ` Bruce Richardson
2015-05-06 8:48 ` Zhang, Helin
0 siblings, 1 reply; 55+ messages in thread
From: Bruce Richardson @ 2015-05-06 8:39 UTC (permalink / raw)
To: Zhang, Helin; +Cc: dev
On Wed, May 06, 2015 at 04:06:17AM +0000, Zhang, Helin wrote:
>
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Tuesday, May 5, 2015 7:05 PM
> > To: Zhang, Helin; dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for
> > QinQ support
> >
> > Hi Helin,
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Helin Zhang
> > > Sent: Tuesday, May 05, 2015 3:32 AM
> > > To: dev@dpdk.org
> > > Subject: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for
> > > QinQ support
> > >
> > > To support QinQ, 'vlan_tci' should be replaced by 'vlan_tci0' and
> > > 'vlan_tci1'. Also new offload flags of 'PKT_RX_QINQ_PKT' and
> > > 'PKT_TX_QINQ_PKT' should be added.
> > >
> > > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > > ---
> > > app/test-pmd/flowgen.c | 2 +-
> > > app/test-pmd/macfwd.c | 2 +-
> > > app/test-pmd/macswap.c | 2 +-
> > > app/test-pmd/rxonly.c | 2 +-
> > > app/test-pmd/txonly.c | 2 +-
> > > app/test/packet_burst_generator.c | 4 ++--
> > > lib/librte_ether/rte_ether.h | 4 ++--
> > > lib/librte_mbuf/rte_mbuf.h | 22
> > +++++++++++++++++++---
> > > lib/librte_pmd_e1000/em_rxtx.c | 8 ++++----
> > > lib/librte_pmd_e1000/igb_rxtx.c | 8 ++++----
> > > lib/librte_pmd_enic/enic_ethdev.c | 2 +-
> > > lib/librte_pmd_enic/enic_main.c | 2 +-
> > > lib/librte_pmd_fm10k/fm10k_rxtx.c | 2 +-
> > > lib/librte_pmd_i40e/i40e_rxtx.c | 8 ++++----
> > > lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 11 +++++------
> > > lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 6 +++---
> > > 16 files changed, 51 insertions(+), 36 deletions(-)
> > >
<snip>
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > > +++ b/lib/librte_mbuf/rte_mbuf.h
> > > @@ -101,11 +101,17 @@ extern "C" {
> > > #define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel
> > packet with IPv6 header. */
> > > #define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if
> > FDIR match. */
> > > #define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes
> > reported if FDIR match. */
> > > +#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with
> > double VLAN stripped. */
> > > /* add new RX flags here */
> > >
> > > /* add new TX flags here */
> > >
> > > /**
> > > + * Second VLAN insertion (QinQ) flag.
> > > + */
> > > +#define PKT_TX_QINQ_PKT (1ULL << 49)
> > > +
> > > +/**
> > > * TCP segmentation offload. To enable this offload feature for a
> > > * packet to be transmitted on hardware supporting TSO:
> > > * - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag
> > > implies @@ -268,7 +274,6 @@ struct rte_mbuf {
> > >
> > > uint16_t data_len; /**< Amount of data in segment buffer. */
> > > uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
> > > - uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU
> > order) */
> > > uint16_t reserved;
> >
> > Now here is an implicit 2-byte hole between 'reserved' and 'rss'.
> > Probably better to make it explicit - make 'reserved' uint32_t.
> Yes, the layout will be changed according to the demands of Vector PMD.
> The vlan structure will be kept the same, but the mbuf structure layout will
> be re-organized a bit.
Why not just put the extra vlan tag into the reserved space. In the original
work to restructure the mbuf, that was what the reserved space was put there
for [it was marked as reserved as it was requested that fields not be fully
dedicated until used, and we did not have double-vlan support at that time].
However, it seems more sensible to put the vlans there now, unless there is
a good reason to move them to the new location in the mbuf that you propose
below.
/Bruce
>
> >
> > Another thing - it looks like your change will break ixgbe vector RX.
> Yes, in the cover-letter, I noted that the vector PMD will be updated soon
> together with the code changes.
>
> >
> > > union {
> > > uint32_t rss; /**< RSS hash result if RSS enabled */
> > > @@ -289,6 +294,15 @@ struct rte_mbuf {
> > > uint32_t usr; /**< User defined tags. See
> > rte_distributor_process() */
> > > } hash; /**< hash information */
> > >
> > > + /* VLAN tags */
> > > + union {
> > > + uint32_t vlan_tags;
> > > + struct {
> > > + uint16_t vlan_tci0;
> > > + uint16_t vlan_tci1;
> >
> > Do you really need to change vlan_tci to vlan_tci0?
> > Can't you keep 'vlan_tci' for first vlan tag, and add something like
> > 'vlan_tci_ext', or 'vlan_tci_next' for second one?
> > Would save you a lot of changes, again users who use single vlan wouldn't
> > need to update their code for 2.1.
> Yes, good point! The names came from the original mbuf definition done by
> Bruce long ago. If more people suggest keeping the old name and just adding
> a new one, I will do that in the next version of the patch set.
> Thank you all!
>
> >
> > > + };
> > > + };
> > > +
> > > uint32_t seqn; /**< Sequence number. See also rte_reorder_insert()
> > > */
> > >
> > > /* second cache line - fields only used in slow path or on TX */ @@
> > > -766,7 +780,8 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf
> > *m)
> > > m->next = NULL;
> > > m->pkt_len = 0;
> > > m->tx_offload = 0;
> > > - m->vlan_tci = 0;
> > > + m->vlan_tci0 = 0;
> > > + m->vlan_tci1 = 0;
> >
> > Why not just:
> > m-> vlan_tags = 0;
> > ?
> Accepted. Good point!
>
> >
> > > m->nb_segs = 1;
> > > m->port = 0xff;
> > >
> > > @@ -838,7 +853,8 @@ static inline void rte_pktmbuf_attach(struct
> > rte_mbuf *mi, struct rte_mbuf *m)
> > > mi->data_off = m->data_off;
> > > mi->data_len = m->data_len;
> > > mi->port = m->port;
> > > - mi->vlan_tci = m->vlan_tci;
> > > + mi->vlan_tci0 = m->vlan_tci0;
> > > + mi->vlan_tci1 = m->vlan_tci1;
> >
> > Same thing, why not:
> > mi-> vlan_tags = m-> vlan_tags;
> > ?
> Accepted. Good point!
>
> Regards,
> Helin
<snip>
^ permalink raw reply [flat|nested] 55+ messages in thread
* Re: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for QinQ support
2015-05-06 8:39 ` Bruce Richardson
@ 2015-05-06 8:48 ` Zhang, Helin
0 siblings, 0 replies; 55+ messages in thread
From: Zhang, Helin @ 2015-05-06 8:48 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: dev
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Wednesday, May 6, 2015 4:39 PM
> To: Zhang, Helin
> Cc: Ananyev, Konstantin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for
> QinQ support
>
> On Wed, May 06, 2015 at 04:06:17AM +0000, Zhang, Helin wrote:
> >
> >
> > > -----Original Message-----
> > > From: Ananyev, Konstantin
> > > Sent: Tuesday, May 5, 2015 7:05 PM
> > > To: Zhang, Helin; dev@dpdk.org
> > > Subject: RE: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure
> > > for QinQ support
> > >
> > > Hi Helin,
> > >
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Helin Zhang
> > > > Sent: Tuesday, May 05, 2015 3:32 AM
> > > > To: dev@dpdk.org
> > > > Subject: [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure
> > > > for QinQ support
> > > >
> > > > To support QinQ, 'vlan_tci' should be replaced by 'vlan_tci0' and
> > > > 'vlan_tci1'. Also new offload flags of 'PKT_RX_QINQ_PKT' and
> > > > 'PKT_TX_QINQ_PKT' should be added.
> > > >
> > > > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > > > ---
> > > > app/test-pmd/flowgen.c | 2 +-
> > > > app/test-pmd/macfwd.c | 2 +-
> > > > app/test-pmd/macswap.c | 2 +-
> > > > app/test-pmd/rxonly.c | 2 +-
> > > > app/test-pmd/txonly.c | 2 +-
> > > > app/test/packet_burst_generator.c | 4 ++--
> > > > lib/librte_ether/rte_ether.h | 4 ++--
> > > > lib/librte_mbuf/rte_mbuf.h | 22
> > > +++++++++++++++++++---
> > > > lib/librte_pmd_e1000/em_rxtx.c | 8 ++++----
> > > > lib/librte_pmd_e1000/igb_rxtx.c | 8 ++++----
> > > > lib/librte_pmd_enic/enic_ethdev.c | 2 +-
> > > > lib/librte_pmd_enic/enic_main.c | 2 +-
> > > > lib/librte_pmd_fm10k/fm10k_rxtx.c | 2 +-
> > > > lib/librte_pmd_i40e/i40e_rxtx.c | 8 ++++----
> > > > lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 11 +++++------
> > > > lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 6 +++---
> > > > 16 files changed, 51 insertions(+), 36 deletions(-)
> > > >
> <snip>
> > > --- a/lib/librte_mbuf/rte_mbuf.h
> > > > +++ b/lib/librte_mbuf/rte_mbuf.h
> > > > @@ -101,11 +101,17 @@ extern "C" { #define
> PKT_RX_TUNNEL_IPV6_HDR
> > > > (1ULL << 12) /**< RX tunnel
> > > packet with IPv6 header. */
> > > > #define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported
> if
> > > FDIR match. */
> > > > #define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes
> > > reported if FDIR match. */
> > > > +#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet
> with
> > > double VLAN stripped. */
> > > > /* add new RX flags here */
> > > >
> > > > /* add new TX flags here */
> > > >
> > > > /**
> > > > + * Second VLAN insertion (QinQ) flag.
> > > > + */
> > > > +#define PKT_TX_QINQ_PKT (1ULL << 49)
> > > > +
> > > > +/**
> > > > * TCP segmentation offload. To enable this offload feature for a
> > > > * packet to be transmitted on hardware supporting TSO:
> > > > * - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag
> > > > implies @@ -268,7 +274,6 @@ struct rte_mbuf {
> > > >
> > > > uint16_t data_len; /**< Amount of data in segment
> buffer. */
> > > > uint32_t pkt_len; /**< Total pkt len: sum of all
> segments. */
> > > > - uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU
> > > order) */
> > > > uint16_t reserved;
> > >
> > > Now here is an implicit 2-byte hole between 'reserved' and 'rss'.
> > > Probably better to make it explicit - make 'reserved' uint32_t.
> > Yes, the layout will be changed according to the demands of Vector PMD.
> > The vlan structure will be kept the same, but the mbuf structure
> > layout will be re-organized a bit.
>
> Why not just put the extra vlan tag into the reserved space. In the original
> work to restructure the mbuf, that was what the reserved space was put
> there for [it was marked as reserved as it was requested that fields not be
> fully dedicated until used, and we did not have double-vlan support at that
> time].
> However, it seems more sensible to put the vlans there now, unless there
> is a good reason to move them to the new location in the mbuf that you
> propose below.
>
> /Bruce
Thank you very much for the reminder!
The main reason is that we planned to enlarge the packet_type field, so we
have to move the VLAN fields down. Hopefully the unified packet type changes
can be merged before this one.
Regards,
Helin
>
> >
> > >
> > > Another thing - it looks like your change will break ixgbe vector RX.
> > Yes, in the cover-letter, I noted that the vector PMD will be updated
> > soon together with the code changes.
> >
> > >
> > > > union {
> > > > uint32_t rss; /**< RSS hash result if RSS enabled */
> > > > @@ -289,6 +294,15 @@ struct rte_mbuf {
> > > > uint32_t usr; /**< User defined tags. See
> > > rte_distributor_process() */
> > > > } hash; /**< hash information */
> > > >
> > > > + /* VLAN tags */
> > > > + union {
> > > > + uint32_t vlan_tags;
> > > > + struct {
> > > > + uint16_t vlan_tci0;
> > > > + uint16_t vlan_tci1;
> > >
> > > Do you really need to change vlan_tci to vlan_tci0?
> > > Can't you keep 'vlan_tci' for first vlan tag, and add something like
> > > 'vlan_tci_ext', or 'vlan_tci_next' for second one?
> > > Would save you a lot of changes, again users who use single vlan
> > > wouldn't need to update their code for 2.1.
> > Yes, good point! The names came from the original mbuf definition done
> > by Bruce long ago. If more people suggest keeping the old name and just
> > adding a new one, I will do that in the next version of the patch set.
> > Thank you all!
> >
> > >
> > > > + };
> > > > + };
> > > > +
> > > > uint32_t seqn; /**< Sequence number. See also
> > > > rte_reorder_insert() */
> > > >
> > > > /* second cache line - fields only used in slow path or on TX */
> > > > @@
> > > > -766,7 +780,8 @@ static inline void rte_pktmbuf_reset(struct
> > > > rte_mbuf
> > > *m)
> > > > m->next = NULL;
> > > > m->pkt_len = 0;
> > > > m->tx_offload = 0;
> > > > - m->vlan_tci = 0;
> > > > + m->vlan_tci0 = 0;
> > > > + m->vlan_tci1 = 0;
> > >
> > > Why just not:
> > > m-> vlan_tags = 0;
> > > ?
> > Accepted. Good point!
> >
> > >
> > > > m->nb_segs = 1;
> > > > m->port = 0xff;
> > > >
> > > > @@ -838,7 +853,8 @@ static inline void rte_pktmbuf_attach(struct
> > > rte_mbuf *mi, struct rte_mbuf *m)
> > > > mi->data_off = m->data_off;
> > > > mi->data_len = m->data_len;
> > > > mi->port = m->port;
> > > > - mi->vlan_tci = m->vlan_tci;
> > > > + mi->vlan_tci0 = m->vlan_tci0;
> > > > + mi->vlan_tci1 = m->vlan_tci1;
> > >
> > > Same thing, why not:
> > > mi-> vlan_tags = m-> vlan_tags;
> > > ?
> > Accepted. Good point!
> >
> > Regards,
> > Helin
>
> <snip>
^ permalink raw reply [flat|nested] 55+ messages in thread
* [dpdk-dev] [PATCH 0/5] support i40e QinQ stripping and insertion
2015-05-05 2:32 [dpdk-dev] [PATCH RFC 0/6] support of QinQ stripping and insertion of i40e Helin Zhang
` (5 preceding siblings ...)
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 6/6] app/testpmd: support of QinQ stripping and insertion Helin Zhang
@ 2015-05-26 8:36 ` Helin Zhang
2015-05-26 8:36 ` [dpdk-dev] [PATCH 1/5] ixgbe: remove a discarded source line Helin Zhang
` (5 more replies)
6 siblings, 6 replies; 55+ messages in thread
From: Helin Zhang @ 2015-05-26 8:36 UTC (permalink / raw)
To: dev
As the i40e hardware can be reconfigured to support QinQ stripping and
insertion, this patch set enables that, using the reserved 16 bits in
'struct rte_mbuf' for the second vlan tag.
A corresponding command is added in testpmd for testing.
Note that there is no need to rework the vector PMD, as nothing it uses
has changed.
Helin Zhang (5):
ixgbe: remove a discarded source line
mbuf: use the reserved 16 bits for double vlan
i40e: support double vlan stripping and insertion
i40evf: add supported offload capability flags
app/testpmd: add test cases for qinq stripping and insertion
app/test-pmd/cmdline.c | 78 +++++++++++++++++++++++++++++++++----
app/test-pmd/config.c | 21 +++++++++-
app/test-pmd/flowgen.c | 4 +-
app/test-pmd/macfwd.c | 3 ++
app/test-pmd/macswap.c | 3 ++
app/test-pmd/rxonly.c | 3 ++
app/test-pmd/testpmd.h | 6 ++-
app/test-pmd/txonly.c | 8 +++-
drivers/net/i40e/i40e_ethdev.c | 52 +++++++++++++++++++++++++
drivers/net/i40e/i40e_ethdev_vf.c | 13 +++++++
drivers/net/i40e/i40e_rxtx.c | 81 +++++++++++++++++++++++++--------------
drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
lib/librte_ether/rte_ethdev.h | 28 +++++++-------
lib/librte_mbuf/rte_mbuf.h | 10 ++++-
14 files changed, 255 insertions(+), 56 deletions(-)
--
1.9.3
* [dpdk-dev] [PATCH 1/5] ixgbe: remove a discarded source line
2015-05-26 8:36 ` [dpdk-dev] [PATCH 0/5] support i40e " Helin Zhang
@ 2015-05-26 8:36 ` Helin Zhang
2015-06-01 8:50 ` Olivier MATZ
2015-05-26 8:36 ` [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan Helin Zhang
` (4 subsequent siblings)
5 siblings, 1 reply; 55+ messages in thread
From: Helin Zhang @ 2015-05-26 8:36 UTC (permalink / raw)
To: dev
A little-endian to CPU order conversion was added for reading the vlan
tag from the RX descriptor, but the original source line was never
deleted. Remove that discarded line.
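As a side note, the fix matters because descriptor fields are stored little-endian while the CPU may not be. The following is a minimal, self-contained sketch of what a conversion like DPDK's rte_le_to_cpu_16() does (the helper name `le16_to_cpu` and the byte array are illustrative, not from the patch):

```c
#include <stdint.h>
#include <assert.h>

/* Illustration only: a portable equivalent of rte_le_to_cpu_16().
 * It assembles the value from explicit byte positions, so the result
 * is correct on both little- and big-endian hosts. */
uint16_t le16_to_cpu(const uint8_t bytes[2])
{
    return (uint16_t)(bytes[0] | ((uint16_t)bytes[1] << 8));
}
```

For example, a vlan tag of 0x0123 stored little-endian in a descriptor appears as the byte sequence { 0x23, 0x01 }; reading it without the conversion would yield the wrong tag on a big-endian CPU.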
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 4f9ab22..041c544 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -981,7 +981,6 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
pkt_len = (uint16_t)(rxdp[j].wb.upper.length - rxq->crc_len);
mb->data_len = pkt_len;
mb->pkt_len = pkt_len;
- mb->vlan_tci = rxdp[j].wb.upper.vlan;
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
--
1.9.3
* [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan
2015-05-26 8:36 ` [dpdk-dev] [PATCH 0/5] support i40e " Helin Zhang
2015-05-26 8:36 ` [dpdk-dev] [PATCH 1/5] ixgbe: remove a discarded source line Helin Zhang
@ 2015-05-26 8:36 ` Helin Zhang
2015-05-26 14:55 ` Stephen Hemminger
2015-06-01 8:50 ` Olivier MATZ
2015-05-26 8:36 ` [dpdk-dev] [PATCH 3/5] i40e: support double vlan stripping and insertion Helin Zhang
` (3 subsequent siblings)
5 siblings, 2 replies; 55+ messages in thread
From: Helin Zhang @ 2015-05-26 8:36 UTC (permalink / raw)
To: dev
Use the reserved 16 bits in the rte_mbuf structure for the outer vlan,
and add QinQ offloading flags for both the RX and TX sides.
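For illustration, this is roughly how an application would consume the new fields on RX. The mock struct below is a trimmed-down stand-in for the real `struct rte_mbuf` (defined in rte_mbuf.h); `PKT_RX_QINQ_PKT` uses bit 15 as in this patch, while the `PKT_RX_VLAN_PKT` bit value here is a placeholder:

```c
#include <stdint.h>
#include <assert.h>

/* Illustration only: mock of the mbuf fields this patch touches.
 * After the change, the former 16-bit 'reserved' field holds the
 * outer VLAN tag. */
#define PKT_RX_VLAN_PKT  (1ULL << 0)   /* placeholder bit value */
#define PKT_RX_QINQ_PKT  (1ULL << 15)  /* bit value from this patch */

struct mock_mbuf {
    uint16_t vlan_tci;       /* inner tag (CPU order) */
    uint16_t vlan_tci_outer; /* outer tag (CPU order), was 'reserved' */
    uint64_t ol_flags;
};

/* Returns how many VLAN tags the hardware stripped (0, 1 or 2). */
int stripped_vlan_count(const struct mock_mbuf *m)
{
    if (m->ol_flags & PKT_RX_QINQ_PKT)
        return 2;  /* both vlan_tci and vlan_tci_outer are valid */
    if (m->ol_flags & PKT_RX_VLAN_PKT)
        return 1;  /* only vlan_tci is valid */
    return 0;
}
```

The point of the flag pair is that an application can keep reading `vlan_tci` unchanged for single-vlan traffic, and only look at `vlan_tci_outer` when `PKT_RX_QINQ_PKT` is set.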
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ab6de67..4551df9 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -101,11 +101,17 @@ extern "C" {
#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
+#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double VLAN stripped. */
/* add new RX flags here */
/* add new TX flags here */
/**
+ * Second VLAN insertion (QinQ) flag.
+ */
+#define PKT_TX_QINQ_PKT (1ULL << 49) /**< TX packet with double VLAN inserted. */
+
+/**
* TCP segmentation offload. To enable this offload feature for a
* packet to be transmitted on hardware supporting TSO:
* - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
@@ -279,7 +285,7 @@ struct rte_mbuf {
uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
- uint16_t reserved;
+ uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU order) */
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
@@ -777,6 +783,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
m->pkt_len = 0;
m->tx_offload = 0;
m->vlan_tci = 0;
+ m->vlan_tci_outer = 0;
m->nb_segs = 1;
m->port = 0xff;
@@ -849,6 +856,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
mi->data_len = m->data_len;
mi->port = m->port;
mi->vlan_tci = m->vlan_tci;
+ mi->vlan_tci_outer = m->vlan_tci_outer;
mi->tx_offload = m->tx_offload;
mi->hash = m->hash;
--
1.9.3
* [dpdk-dev] [PATCH 3/5] i40e: support double vlan stripping and insertion
2015-05-26 8:36 ` [dpdk-dev] [PATCH 0/5] support i40e " Helin Zhang
2015-05-26 8:36 ` [dpdk-dev] [PATCH 1/5] ixgbe: remove a discarded source line Helin Zhang
2015-05-26 8:36 ` [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan Helin Zhang
@ 2015-05-26 8:36 ` Helin Zhang
2015-06-01 8:50 ` Olivier MATZ
2015-05-26 8:36 ` [dpdk-dev] [PATCH 4/5] i40evf: add supported offload capability flags Helin Zhang
` (2 subsequent siblings)
5 siblings, 1 reply; 55+ messages in thread
From: Helin Zhang @ 2015-05-26 8:36 UTC (permalink / raw)
To: dev
It configures specific registers to enable double vlan stripping on
the RX side and insertion on the TX side.
The RX descriptors are parsed, and the vlan tags and flags are saved
to the corresponding mbuf fields if a vlan tag is detected.
The TX descriptors are configured according to the settings in the
mbufs, to trigger hardware insertion of double vlan tags for each
packet sent out.
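On the TX path, the driver's contract with the application can be sketched as below. This is an illustration, not the patch itself: the mock struct stands in for `struct rte_mbuf`, `PKT_TX_QINQ_PKT` uses bit 49 as defined in this series, and the `PKT_TX_VLAN_PKT` bit value is a placeholder:

```c
#include <stdint.h>
#include <assert.h>

/* Illustration only: mock of the mbuf fields used for TX double
 * vlan insertion (real definitions are in rte_mbuf.h). */
#define PKT_TX_VLAN_PKT  (1ULL << 55)  /* placeholder bit value */
#define PKT_TX_QINQ_PKT  (1ULL << 49)  /* bit value from this patch */

struct mock_mbuf {
    uint16_t vlan_tci;       /* inner tag, goes into L2TAG1 */
    uint16_t vlan_tci_outer; /* outer tag, goes into L2TAG2 */
    uint64_t ol_flags;
};

/* Request hardware insertion of both tags for one packet. The i40e
 * driver then writes vlan_tci into the data descriptor and, because
 * PKT_TX_QINQ_PKT is set, emits a context descriptor carrying
 * vlan_tci_outer. */
void request_qinq_insert(struct mock_mbuf *m,
                         uint16_t inner, uint16_t outer)
{
    m->vlan_tci = inner;
    m->vlan_tci_outer = outer;
    m->ol_flags |= PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT;
}
```

This mirrors the driver change in i40e_xmit_pkts(): setting PKT_TX_QINQ_PKT forces a context descriptor, which is why i40e_calc_context_desc() now includes that flag in its mask.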
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 52 +++++++++++++++++++++++++
drivers/net/i40e/i40e_ethdev_vf.c | 6 +++
drivers/net/i40e/i40e_rxtx.c | 81 +++++++++++++++++++++++++--------------
lib/librte_ether/rte_ethdev.h | 28 +++++++-------
4 files changed, 125 insertions(+), 42 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index fb64027..e841623 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -211,6 +211,7 @@ static int i40e_dev_filter_ctrl(struct rte_eth_dev *dev,
void *arg);
static void i40e_configure_registers(struct i40e_hw *hw);
static void i40e_hw_init(struct i40e_hw *hw);
+static int i40e_config_qinq(struct i40e_hw *hw, struct i40e_vsi *vsi);
static const struct rte_pci_id pci_id_i40e_map[] = {
#define RTE_PCI_DEV_ID_DECL_I40E(vend, dev) {RTE_PCI_DEVICE(vend, dev)},
@@ -1529,11 +1530,13 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_vfs = dev->pci_dev->max_vfs;
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_QINQ_STRIP |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM;
dev_info->tx_offload_capa =
DEV_TX_OFFLOAD_VLAN_INSERT |
+ DEV_TX_OFFLOAD_QINQ_INSERT |
DEV_TX_OFFLOAD_IPV4_CKSUM |
DEV_TX_OFFLOAD_UDP_CKSUM |
DEV_TX_OFFLOAD_TCP_CKSUM |
@@ -3056,6 +3059,7 @@ i40e_vsi_setup(struct i40e_pf *pf,
* macvlan filter which is expected and cannot be removed.
*/
i40e_update_default_filter_setting(vsi);
+ i40e_config_qinq(hw, vsi);
} else if (type == I40E_VSI_SRIOV) {
memset(&ctxt, 0, sizeof(ctxt));
/**
@@ -3096,6 +3100,8 @@ i40e_vsi_setup(struct i40e_pf *pf,
* Since VSI is not created yet, only configure parameter,
* will add vsi below.
*/
+
+ i40e_config_qinq(hw, vsi);
} else if (type == I40E_VSI_VMDQ2) {
memset(&ctxt, 0, sizeof(ctxt));
/*
@@ -5697,3 +5703,49 @@ i40e_configure_registers(struct i40e_hw *hw)
"0x%"PRIx32, reg_table[i].val, reg_table[i].addr);
}
}
+
+#define I40E_VSI_TSR(_i) (0x00050800 + ((_i) * 4))
+#define I40E_VSI_TSR_QINQ_CONFIG 0xc030
+#define I40E_VSI_L2TAGSTXVALID(_i) (0x00042800 + ((_i) * 4))
+#define I40E_VSI_L2TAGSTXVALID_QINQ 0xab
+static int
+i40e_config_qinq(struct i40e_hw *hw, struct i40e_vsi *vsi)
+{
+ uint32_t reg;
+ int ret;
+
+ if (vsi->vsi_id >= I40E_MAX_NUM_VSIS) {
+ PMD_DRV_LOG(ERR, "VSI ID exceeds the maximum");
+ return -EINVAL;
+ }
+
+ /* Configure for double VLAN RX stripping */
+ reg = I40E_READ_REG(hw, I40E_VSI_TSR(vsi->vsi_id));
+ if ((reg & I40E_VSI_TSR_QINQ_CONFIG) != I40E_VSI_TSR_QINQ_CONFIG) {
+ reg |= I40E_VSI_TSR_QINQ_CONFIG;
+ ret = i40e_aq_debug_write_register(hw,
+ I40E_VSI_TSR(vsi->vsi_id),
+ reg, NULL);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to update VSI_TSR[%d]",
+ vsi->vsi_id);
+ return I40E_ERR_CONFIG;
+ }
+ }
+
+ /* Configure for double VLAN TX insertion */
+ reg = I40E_READ_REG(hw, I40E_VSI_L2TAGSTXVALID(vsi->vsi_id));
+ if ((reg & 0xff) != I40E_VSI_L2TAGSTXVALID_QINQ) {
+ reg = I40E_VSI_L2TAGSTXVALID_QINQ;
+ ret = i40e_aq_debug_write_register(hw,
+ I40E_VSI_L2TAGSTXVALID(
+ vsi->vsi_id), reg, NULL);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to update "
+ "VSI_L2TAGSTXVALID[%d]", vsi->vsi_id);
+ return I40E_ERR_CONFIG;
+ }
+ }
+
+ return 0;
+}
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 9f92a2f..1a4d088 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1643,6 +1643,12 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
+ dev_info->rx_offload_capa =
+ DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_QINQ_STRIP;
+ dev_info->tx_offload_capa =
+ DEV_TX_OFFLOAD_VLAN_INSERT |
+ DEV_TX_OFFLOAD_QINQ_INSERT;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 787f0bd..442494e 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -95,18 +95,44 @@ static uint16_t i40e_xmit_pkts_simple(void *tx_queue,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
+static inline void
+i40e_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union i40e_rx_desc *rxdp)
+{
+ if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+ (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
+ mb->ol_flags |= PKT_RX_VLAN_PKT;
+ mb->vlan_tci =
+ rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
+ rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1));
+ } else {
+ mb->vlan_tci = 0;
+ }
+#ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
+ (1 << I40E_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT)) {
+ mb->ol_flags |= PKT_RX_QINQ_PKT;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_1),
+ rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
+ PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
+ mb->vlan_tci, mb->vlan_tci_outer);
+}
+
/* Translate the rx descriptor status to pkt flags */
static inline uint64_t
i40e_rxd_status_to_pkt_flags(uint64_t qword)
{
uint64_t flags;
- /* Check if VLAN packet */
- flags = qword & (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
- PKT_RX_VLAN_PKT : 0;
-
/* Check if RSS_HASH */
- flags |= (((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) &
+ flags = (((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) &
I40E_RX_DESC_FLTSTAT_RSS_HASH) ==
I40E_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
@@ -697,16 +723,12 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
mb = rxep[j].mbuf;
qword1 = rte_le_to_cpu_64(\
rxdp[j].wb.qword1.status_error_len);
- rx_status = (qword1 & I40E_RXD_QW1_STATUS_MASK) >>
- I40E_RXD_QW1_STATUS_SHIFT;
pkt_len = ((qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
mb->data_len = pkt_len;
mb->pkt_len = pkt_len;
- mb->vlan_tci = rx_status &
- (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
- rte_le_to_cpu_16(\
- rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
+ mb->ol_flags = 0;
+ i40e_rxd_to_vlan_tci(mb, &rxdp[j]);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
@@ -720,7 +742,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
if (pkt_flags & PKT_RX_FDIR)
pkt_flags |= i40e_rxd_build_fdir(&rxdp[j], mb);
- mb->ol_flags = pkt_flags;
+ mb->ol_flags |= pkt_flags;
}
for (j = 0; j < I40E_LOOK_AHEAD; j++)
@@ -946,10 +968,8 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rxm->pkt_len = rx_packet_len;
rxm->data_len = rx_packet_len;
rxm->port = rxq->port_id;
-
- rxm->vlan_tci = rx_status &
- (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
- rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
+ rxm->ol_flags = 0;
+ i40e_rxd_to_vlan_tci(rxm, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
@@ -961,7 +981,7 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (pkt_flags & PKT_RX_FDIR)
pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
- rxm->ol_flags = pkt_flags;
+ rxm->ol_flags |= pkt_flags;
rx_pkts[nb_rx++] = rxm;
}
@@ -1106,9 +1126,8 @@ i40e_recv_scattered_pkts(void *rx_queue,
}
first_seg->port = rxq->port_id;
- first_seg->vlan_tci = (rx_status &
- (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
- rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
+ first_seg->ol_flags = 0;
+ i40e_rxd_to_vlan_tci(first_seg, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
@@ -1121,7 +1140,7 @@ i40e_recv_scattered_pkts(void *rx_queue,
if (pkt_flags & PKT_RX_FDIR)
pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
- first_seg->ol_flags = pkt_flags;
+ first_seg->ol_flags |= pkt_flags;
/* Prefetch data of first segment, if configured to do so. */
rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
@@ -1159,17 +1178,15 @@ i40e_recv_scattered_pkts(void *rx_queue,
static inline uint16_t
i40e_calc_context_desc(uint64_t flags)
{
- uint64_t mask = 0ULL;
-
- mask |= (PKT_TX_OUTER_IP_CKSUM | PKT_TX_TCP_SEG);
+ static uint64_t mask = PKT_TX_OUTER_IP_CKSUM |
+ PKT_TX_TCP_SEG |
+ PKT_TX_QINQ_PKT;
#ifdef RTE_LIBRTE_IEEE1588
mask |= PKT_TX_IEEE1588_TMST;
#endif
- if (flags & mask)
- return 1;
- return 0;
+ return ((flags & mask) ? 1 : 0);
}
/* set i40e TSO context descriptor */
@@ -1290,9 +1307,9 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
}
/* Descriptor based VLAN insertion */
- if (ol_flags & PKT_TX_VLAN_PKT) {
+ if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
tx_flags |= tx_pkt->vlan_tci <<
- I40E_TX_FLAG_L2TAG1_SHIFT;
+ I40E_TX_FLAG_L2TAG1_SHIFT;
tx_flags |= I40E_TX_FLAG_INSERT_VLAN;
td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
td_tag = (tx_flags & I40E_TX_FLAG_L2TAG1_MASK) >>
@@ -1340,6 +1357,12 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ctx_txd->tunneling_params =
rte_cpu_to_le_32(cd_tunneling_params);
+ if (ol_flags & PKT_TX_QINQ_PKT) {
+ cd_l2tag2 = tx_pkt->vlan_tci_outer;
+ cd_type_cmd_tso_mss |=
+ ((uint64_t)I40E_TX_CTX_DESC_IL2TAG2 <<
+ I40E_TXD_CTX_QW1_CMD_SHIFT);
+ }
ctx_txd->l2tag2 = rte_cpu_to_le_16(cd_l2tag2);
ctx_txd->type_cmd_tso_mss =
rte_cpu_to_le_64(cd_type_cmd_tso_mss);
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 16dbe00..b26670e 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -882,23 +882,25 @@ struct rte_eth_conf {
/**
* RX offload capabilities of a device.
*/
-#define DEV_RX_OFFLOAD_VLAN_STRIP 0x00000001
-#define DEV_RX_OFFLOAD_IPV4_CKSUM 0x00000002
-#define DEV_RX_OFFLOAD_UDP_CKSUM 0x00000004
-#define DEV_RX_OFFLOAD_TCP_CKSUM 0x00000008
-#define DEV_RX_OFFLOAD_TCP_LRO 0x00000010
+#define DEV_RX_OFFLOAD_VLAN_STRIP 0x00000001
+#define DEV_RX_OFFLOAD_QINQ_STRIP 0x00000002
+#define DEV_RX_OFFLOAD_IPV4_CKSUM 0x00000004
+#define DEV_RX_OFFLOAD_UDP_CKSUM 0x00000008
+#define DEV_RX_OFFLOAD_TCP_CKSUM 0x00000010
+#define DEV_RX_OFFLOAD_TCP_LRO 0x00000020
/**
* TX offload capabilities of a device.
*/
-#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
-#define DEV_TX_OFFLOAD_IPV4_CKSUM 0x00000002
-#define DEV_TX_OFFLOAD_UDP_CKSUM 0x00000004
-#define DEV_TX_OFFLOAD_TCP_CKSUM 0x00000008
-#define DEV_TX_OFFLOAD_SCTP_CKSUM 0x00000010
-#define DEV_TX_OFFLOAD_TCP_TSO 0x00000020
-#define DEV_TX_OFFLOAD_UDP_TSO 0x00000040
-#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
+#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000002
+#define DEV_TX_OFFLOAD_IPV4_CKSUM 0x00000004
+#define DEV_TX_OFFLOAD_UDP_CKSUM 0x00000008
+#define DEV_TX_OFFLOAD_TCP_CKSUM 0x00000010
+#define DEV_TX_OFFLOAD_SCTP_CKSUM 0x00000020
+#define DEV_TX_OFFLOAD_TCP_TSO 0x00000040
+#define DEV_TX_OFFLOAD_UDP_TSO 0x00000080
+#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000100
struct rte_eth_dev_info {
struct rte_pci_device *pci_dev; /**< Device PCI information. */
--
1.9.3
* [dpdk-dev] [PATCH 4/5] i40evf: add supported offload capability flags
2015-05-26 8:36 ` [dpdk-dev] [PATCH 0/5] support i40e " Helin Zhang
` (2 preceding siblings ...)
2015-05-26 8:36 ` [dpdk-dev] [PATCH 3/5] i40e: support double vlan stripping and insertion Helin Zhang
@ 2015-05-26 8:36 ` Helin Zhang
2015-05-26 8:36 ` [dpdk-dev] [PATCH 5/5] app/testpmd: add test cases for qinq stripping and insertion Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Helin Zhang
5 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-05-26 8:36 UTC (permalink / raw)
To: dev
Add the checksum offload capability flags, which have already been
supported for a long time.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/i40e/i40e_ethdev_vf.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 1a4d088..12d7917 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1645,10 +1645,17 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP;
+ DEV_RX_OFFLOAD_QINQ_STRIP |
+ DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM;
dev_info->tx_offload_capa =
DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT;
+ DEV_TX_OFFLOAD_QINQ_INSERT |
+ DEV_TX_OFFLOAD_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_UDP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_CKSUM |
+ DEV_TX_OFFLOAD_SCTP_CKSUM;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
--
1.9.3
* [dpdk-dev] [PATCH 5/5] app/testpmd: add test cases for qinq stripping and insertion
2015-05-26 8:36 ` [dpdk-dev] [PATCH 0/5] support i40e " Helin Zhang
` (3 preceding siblings ...)
2015-05-26 8:36 ` [dpdk-dev] [PATCH 4/5] i40evf: add supported offload capability flags Helin Zhang
@ 2015-05-26 8:36 ` Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Helin Zhang
5 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-05-26 8:36 UTC (permalink / raw)
To: dev
If a double vlan is detected, its stripped flag and vlan tags can be
printed in rxonly mode. The 'tx_vlan set' test command is extended to
set both single and double vlan tags on the TX side for each packet to
be sent out.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pmd/cmdline.c | 78 +++++++++++++++++++++++++++++++++++++++++++++-----
app/test-pmd/config.c | 21 +++++++++++++-
app/test-pmd/flowgen.c | 4 ++-
app/test-pmd/macfwd.c | 3 ++
app/test-pmd/macswap.c | 3 ++
app/test-pmd/rxonly.c | 3 ++
app/test-pmd/testpmd.h | 6 +++-
app/test-pmd/txonly.c | 8 ++++--
8 files changed, 114 insertions(+), 12 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index f01db2a..db2e73e 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -304,9 +304,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"rx_vxlan_port rm (udp_port) (port_id)\n"
" Remove an UDP port for VXLAN packet filter on a port\n\n"
- "tx_vlan set vlan_id (port_id)\n"
- " Set hardware insertion of VLAN ID in packets sent"
- " on a port.\n\n"
+ "tx_vlan set (port_id) vlan_id[, vlan_id_outer]\n"
+ " Set hardware insertion of VLAN IDs (single or double VLAN "
+ "depends on the number of VLAN IDs) in packets sent on a port.\n\n"
"tx_vlan set pvid port_id vlan_id (on|off)\n"
" Set port based TX VLAN insertion.\n\n"
@@ -2799,8 +2799,8 @@ cmdline_parse_inst_t cmd_rx_vlan_filter = {
struct cmd_tx_vlan_set_result {
cmdline_fixed_string_t tx_vlan;
cmdline_fixed_string_t set;
- uint16_t vlan_id;
uint8_t port_id;
+ uint16_t vlan_id;
};
static void
@@ -2809,6 +2809,13 @@ cmd_tx_vlan_set_parsed(void *parsed_result,
__attribute__((unused)) void *data)
{
struct cmd_tx_vlan_set_result *res = parsed_result;
+ int vlan_offload = rte_eth_dev_get_vlan_offload(res->port_id);
+
+ if (vlan_offload & ETH_VLAN_EXTEND_OFFLOAD) {
+ printf("Error, as QinQ has been enabled.\n");
+ return;
+ }
+
tx_vlan_set(res->port_id, res->vlan_id);
}
@@ -2828,13 +2835,69 @@ cmdline_parse_token_num_t cmd_tx_vlan_set_portid =
cmdline_parse_inst_t cmd_tx_vlan_set = {
.f = cmd_tx_vlan_set_parsed,
.data = NULL,
- .help_str = "enable hardware insertion of a VLAN header with a given "
- "TAG Identifier in packets sent on a port",
+ .help_str = "enable hardware insertion of a single VLAN header "
+ "with a given TAG Identifier in packets sent on a port",
.tokens = {
(void *)&cmd_tx_vlan_set_tx_vlan,
(void *)&cmd_tx_vlan_set_set,
- (void *)&cmd_tx_vlan_set_vlanid,
(void *)&cmd_tx_vlan_set_portid,
+ (void *)&cmd_tx_vlan_set_vlanid,
+ NULL,
+ },
+};
+
+/* *** ENABLE HARDWARE INSERTION OF Double VLAN HEADER IN TX PACKETS *** */
+struct cmd_tx_vlan_set_qinq_result {
+ cmdline_fixed_string_t tx_vlan;
+ cmdline_fixed_string_t set;
+ uint8_t port_id;
+ uint16_t vlan_id;
+ uint16_t vlan_id_outer;
+};
+
+static void
+cmd_tx_vlan_set_qinq_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_tx_vlan_set_qinq_result *res = parsed_result;
+ int vlan_offload = rte_eth_dev_get_vlan_offload(res->port_id);
+
+ if (!(vlan_offload & ETH_VLAN_EXTEND_OFFLOAD)) {
+ printf("Error, as QinQ hasn't been enabled.\n");
+ return;
+ }
+
+ tx_qinq_set(res->port_id, res->vlan_id, res->vlan_id_outer);
+}
+
+cmdline_parse_token_string_t cmd_tx_vlan_set_qinq_tx_vlan =
+ TOKEN_STRING_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ tx_vlan, "tx_vlan");
+cmdline_parse_token_string_t cmd_tx_vlan_set_qinq_set =
+ TOKEN_STRING_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ set, "set");
+cmdline_parse_token_num_t cmd_tx_vlan_set_qinq_portid =
+ TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ port_id, UINT8);
+cmdline_parse_token_num_t cmd_tx_vlan_set_qinq_vlanid =
+ TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ vlan_id, UINT16);
+cmdline_parse_token_num_t cmd_tx_vlan_set_qinq_vlanid_outer =
+ TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ vlan_id_outer, UINT16);
+
+cmdline_parse_inst_t cmd_tx_vlan_set_qinq = {
+ .f = cmd_tx_vlan_set_qinq_parsed,
+ .data = NULL,
+ .help_str = "enable hardware insertion of double VLAN header "
+ "with given TAG Identifiers in packets sent on a port",
+ .tokens = {
+ (void *)&cmd_tx_vlan_set_qinq_tx_vlan,
+ (void *)&cmd_tx_vlan_set_qinq_set,
+ (void *)&cmd_tx_vlan_set_qinq_portid,
+ (void *)&cmd_tx_vlan_set_qinq_vlanid,
+ (void *)&cmd_tx_vlan_set_qinq_vlanid_outer,
NULL,
},
};
@@ -8782,6 +8845,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_rx_vlan_filter_all,
(cmdline_parse_inst_t *)&cmd_rx_vlan_filter,
(cmdline_parse_inst_t *)&cmd_tx_vlan_set,
+ (cmdline_parse_inst_t *)&cmd_tx_vlan_set_qinq,
(cmdline_parse_inst_t *)&cmd_tx_vlan_reset,
(cmdline_parse_inst_t *)&cmd_tx_vlan_set_pvid,
(cmdline_parse_inst_t *)&cmd_csum_set,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index f788ed5..8c49e4d 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1732,16 +1732,35 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
return;
if (vlan_id_is_invalid(vlan_id))
return;
+ tx_vlan_reset(port_id);
ports[port_id].tx_ol_flags |= TESTPMD_TX_OFFLOAD_INSERT_VLAN;
ports[port_id].tx_vlan_id = vlan_id;
}
void
+tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
+{
+ if (port_id_is_invalid(port_id, ENABLED_WARN))
+ return;
+ if (vlan_id_is_invalid(vlan_id))
+ return;
+ if (vlan_id_is_invalid(vlan_id_outer))
+ return;
+ tx_vlan_reset(port_id);
+ ports[port_id].tx_ol_flags |= TESTPMD_TX_OFFLOAD_INSERT_QINQ;
+ ports[port_id].tx_vlan_id = vlan_id;
+ ports[port_id].tx_vlan_id_outer = vlan_id_outer;
+}
+
+void
tx_vlan_reset(portid_t port_id)
{
if (port_id_is_invalid(port_id, ENABLED_WARN))
return;
- ports[port_id].tx_ol_flags &= ~TESTPMD_TX_OFFLOAD_INSERT_VLAN;
+ ports[port_id].tx_ol_flags &= ~(TESTPMD_TX_OFFLOAD_INSERT_VLAN |
+ TESTPMD_TX_OFFLOAD_INSERT_QINQ);
+ ports[port_id].tx_vlan_id = 0;
+ ports[port_id].tx_vlan_id_outer = 0;
}
void
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 72016c9..fce96dc 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -136,7 +136,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
struct ether_hdr *eth_hdr;
struct ipv4_hdr *ip_hdr;
struct udp_hdr *udp_hdr;
- uint16_t vlan_tci;
+ uint16_t vlan_tci, vlan_tci_outer;
uint16_t ol_flags;
uint16_t nb_rx;
uint16_t nb_tx;
@@ -163,6 +163,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
mbp = current_fwd_lcore()->mbp;
vlan_tci = ports[fs->tx_port].tx_vlan_id;
+ vlan_tci_outer = ports[fs->tx_port].tx_vlan_id_outer;
ol_flags = ports[fs->tx_port].tx_ol_flags;
for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
@@ -208,6 +209,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
pkt->pkt_len = pkt_size;
pkt->ol_flags = ol_flags;
pkt->vlan_tci = vlan_tci;
+ pkt->vlan_tci_outer = vlan_tci_outer;
pkt->l2_len = sizeof(struct ether_hdr);
pkt->l3_len = sizeof(struct ipv4_hdr);
pkts_burst[nb_pkt] = pkt;
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 035e5eb..3b7fffb 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -110,6 +110,8 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
txp = &ports[fs->tx_port];
if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
ol_flags = PKT_TX_VLAN_PKT;
+ if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_QINQ)
+ ol_flags |= PKT_TX_QINQ_PKT;
for (i = 0; i < nb_rx; i++) {
mb = pkts_burst[i];
eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
@@ -121,6 +123,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
mb->l2_len = sizeof(struct ether_hdr);
mb->l3_len = sizeof(struct ipv4_hdr);
mb->vlan_tci = txp->tx_vlan_id;
+ mb->vlan_tci_outer = txp->tx_vlan_id_outer;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c
index 6729849..154889d 100644
--- a/app/test-pmd/macswap.c
+++ b/app/test-pmd/macswap.c
@@ -110,6 +110,8 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
txp = &ports[fs->tx_port];
if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
ol_flags = PKT_TX_VLAN_PKT;
+ if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_QINQ)
+ ol_flags |= PKT_TX_QINQ_PKT;
for (i = 0; i < nb_rx; i++) {
mb = pkts_burst[i];
eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
@@ -123,6 +125,7 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
mb->l2_len = sizeof(struct ether_hdr);
mb->l3_len = sizeof(struct ipv4_hdr);
mb->vlan_tci = txp->tx_vlan_id;
+ mb->vlan_tci_outer = txp->tx_vlan_id_outer;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index ac56090..f6a2f84 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -160,6 +160,9 @@ pkt_burst_receive(struct fwd_stream *fs)
}
if (ol_flags & PKT_RX_VLAN_PKT)
printf(" - VLAN tci=0x%x", mb->vlan_tci);
+ if (ol_flags & PKT_RX_QINQ_PKT)
+ printf(" - QinQ VLAN tci=0x%x, VLAN tci outer=0x%x",
+ mb->vlan_tci, mb->vlan_tci_outer);
if (is_encapsulation) {
struct ipv4_hdr *ipv4_hdr;
struct ipv6_hdr *ipv6_hdr;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index c3b6700..e71951b 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -133,6 +133,8 @@ struct fwd_stream {
#define TESTPMD_TX_OFFLOAD_PARSE_TUNNEL 0x0020
/** Insert VLAN header in forward engine */
#define TESTPMD_TX_OFFLOAD_INSERT_VLAN 0x0040
+/** Insert double VLAN header in forward engine */
+#define TESTPMD_TX_OFFLOAD_INSERT_QINQ 0x0080
/**
* The data structure associated with each port.
@@ -149,7 +151,8 @@ struct rte_port {
unsigned int socket_id; /**< For NUMA support */
uint16_t tx_ol_flags;/**< TX Offload Flags (TESTPMD_TX_OFFLOAD...). */
uint16_t tso_segsz; /**< MSS for segmentation offload. */
- uint16_t tx_vlan_id; /**< Tag Id. in TX VLAN packets. */
+ uint16_t tx_vlan_id;/**< The tag ID */
+ uint16_t tx_vlan_id_outer;/**< The outer tag ID */
void *fwd_ctx; /**< Forwarding mode context */
uint64_t rx_bad_ip_csum; /**< rx pkts with bad ip checksum */
uint64_t rx_bad_l4_csum; /**< rx pkts with bad l4 checksum */
@@ -513,6 +516,7 @@ int rx_vft_set(portid_t port_id, uint16_t vlan_id, int on);
void vlan_extend_set(portid_t port_id, int on);
void vlan_tpid_set(portid_t port_id, uint16_t tp_id);
void tx_vlan_set(portid_t port_id, uint16_t vlan_id);
+void tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer);
void tx_vlan_reset(portid_t port_id);
void tx_vlan_pvid_set(portid_t port_id, uint16_t vlan_id, int on);
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index ca32c85..8ce6109 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -202,7 +202,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
struct ether_hdr eth_hdr;
uint16_t nb_tx;
uint16_t nb_pkt;
- uint16_t vlan_tci;
+ uint16_t vlan_tci, vlan_tci_outer;
uint64_t ol_flags = 0;
uint8_t i;
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
@@ -218,8 +218,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
mbp = current_fwd_lcore()->mbp;
txp = &ports[fs->tx_port];
vlan_tci = txp->tx_vlan_id;
+ vlan_tci_outer = txp->tx_vlan_id_outer;
if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
ol_flags = PKT_TX_VLAN_PKT;
+ if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_QINQ)
+ ol_flags |= PKT_TX_QINQ_PKT;
for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
pkt = tx_mbuf_alloc(mbp);
if (pkt == NULL) {
@@ -266,7 +269,8 @@ pkt_burst_transmit(struct fwd_stream *fs)
pkt->nb_segs = tx_pkt_nb_segs;
pkt->pkt_len = tx_pkt_length;
pkt->ol_flags = ol_flags;
- pkt->vlan_tci = vlan_tci;
+ pkt->vlan_tci = vlan_tci;
+ pkt->vlan_tci_outer = vlan_tci_outer;
pkt->l2_len = sizeof(struct ether_hdr);
pkt->l3_len = sizeof(struct ipv4_hdr);
pkts_burst[nb_pkt] = pkt;
--
1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan
2015-05-26 8:36 ` [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan Helin Zhang
@ 2015-05-26 14:55 ` Stephen Hemminger
2015-05-26 15:00 ` Zhang, Helin
2015-05-26 15:02 ` Ananyev, Konstantin
2015-06-01 8:50 ` Olivier MATZ
1 sibling, 2 replies; 55+ messages in thread
From: Stephen Hemminger @ 2015-05-26 14:55 UTC (permalink / raw)
To: Helin Zhang; +Cc: dev
On Tue, 26 May 2015 16:36:37 +0800
Helin Zhang <helin.zhang@intel.com> wrote:
> Use the reserved 16 bits in rte_mbuf structure for the outer vlan,
> also add QinQ offloading flags for both RX and TX sides.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Yet another change that is much needed, but breaks ABI compatibility.
* Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan
2015-05-26 14:55 ` Stephen Hemminger
@ 2015-05-26 15:00 ` Zhang, Helin
2015-05-26 15:02 ` Ananyev, Konstantin
1 sibling, 0 replies; 55+ messages in thread
From: Zhang, Helin @ 2015-05-26 15:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
Hi Stephen
> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Tuesday, May 26, 2015 10:55 PM
> To: Zhang, Helin
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for
> double vlan
>
> On Tue, 26 May 2015 16:36:37 +0800
> Helin Zhang <helin.zhang@intel.com> wrote:
>
> > Use the reserved 16 bits in rte_mbuf structure for the outer vlan,
> > also add QinQ offloading flags for both RX and TX sides.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
>
> Yet another change that is much needed, but breaks ABI compatibility.
Even when just using the reserved 16 bits? It seems so.
Would it be acceptable to keep the original name 'reserved' for the outer vlan,
then announce the name change and rename it one release later?
Regards,
Helin
* Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan
2015-05-26 14:55 ` Stephen Hemminger
2015-05-26 15:00 ` Zhang, Helin
@ 2015-05-26 15:02 ` Ananyev, Konstantin
2015-05-26 15:35 ` Stephen Hemminger
1 sibling, 1 reply; 55+ messages in thread
From: Ananyev, Konstantin @ 2015-05-26 15:02 UTC (permalink / raw)
To: Stephen Hemminger, Zhang, Helin; +Cc: dev
Hi Stephen,
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Stephen Hemminger
> Sent: Tuesday, May 26, 2015 3:55 PM
> To: Zhang, Helin
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan
>
> On Tue, 26 May 2015 16:36:37 +0800
> Helin Zhang <helin.zhang@intel.com> wrote:
>
> > Use the reserved 16 bits in rte_mbuf structure for the outer vlan,
> > also add QinQ offloading flags for both RX and TX sides.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
>
> Yet another change that is much needed, but breaks ABI compatibility.
Why do you think it breaks ABI compatibility?
As far as I can see, it uses a field that was reserved.
Konstantin
* Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan
2015-05-26 15:02 ` Ananyev, Konstantin
@ 2015-05-26 15:35 ` Stephen Hemminger
2015-05-26 15:46 ` Ananyev, Konstantin
0 siblings, 1 reply; 55+ messages in thread
From: Stephen Hemminger @ 2015-05-26 15:35 UTC (permalink / raw)
To: Ananyev, Konstantin; +Cc: dev
On Tue, 26 May 2015 15:02:51 +0000
"Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote:
> Hi Stephen,
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Stephen Hemminger
> > Sent: Tuesday, May 26, 2015 3:55 PM
> > To: Zhang, Helin
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan
> >
> > On Tue, 26 May 2015 16:36:37 +0800
> > Helin Zhang <helin.zhang@intel.com> wrote:
> >
> > > Use the reserved 16 bits in rte_mbuf structure for the outer vlan,
> > > also add QinQ offloading flags for both RX and TX sides.
> > >
> > > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> >
> > Yet another change that is much needed, but breaks ABI compatibility.
>
> Why do you think it breaks ABI compatibility?
> As I can see, it uses field that was reserved.
> Konstantin
Because an application may be assuming something about, or reusing, the reserved fields.
Yes, it would be dumb of an application to do that, but from an absolute ABI point
of view it is a change.
* Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan
2015-05-26 15:35 ` Stephen Hemminger
@ 2015-05-26 15:46 ` Ananyev, Konstantin
2015-05-27 1:07 ` Zhang, Helin
0 siblings, 1 reply; 55+ messages in thread
From: Ananyev, Konstantin @ 2015-05-26 15:46 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Tuesday, May 26, 2015 4:35 PM
> To: Ananyev, Konstantin
> Cc: Zhang, Helin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan
>
> On Tue, 26 May 2015 15:02:51 +0000
> "Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote:
>
> > Hi Stephen,
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Stephen Hemminger
> > > Sent: Tuesday, May 26, 2015 3:55 PM
> > > To: Zhang, Helin
> > > Cc: dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan
> > >
> > > On Tue, 26 May 2015 16:36:37 +0800
> > > Helin Zhang <helin.zhang@intel.com> wrote:
> > >
> > > > Use the reserved 16 bits in rte_mbuf structure for the outer vlan,
> > > > also add QinQ offloading flags for both RX and TX sides.
> > > >
> > > > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > >
> > > Yet another change that is much needed, but breaks ABI compatibility.
> >
> > Why do you think it breaks ABI compatibility?
> > As I can see, it uses field that was reserved.
> > Konstantin
>
> Because an application maybe assuming something or reusing the reserved fields.
But a properly behaving application shouldn't do that, right?
And for misbehaving ones, why should we care about them?
> Yes, it would be dumb of application to do that but from absolute ABI point
> of view it is a change.
So, in theory, even adding a new field to the end of rte_mbuf is an ABI breakage?
Konstantin
* Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan
2015-05-26 15:46 ` Ananyev, Konstantin
@ 2015-05-27 1:07 ` Zhang, Helin
0 siblings, 0 replies; 55+ messages in thread
From: Zhang, Helin @ 2015-05-27 1:07 UTC (permalink / raw)
To: Ananyev, Konstantin, Stephen Hemminger; +Cc: dev
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Tuesday, May 26, 2015 11:46 PM
> To: Stephen Hemminger
> Cc: Zhang, Helin; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for
> double vlan
>
>
>
> > -----Original Message-----
> > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > Sent: Tuesday, May 26, 2015 4:35 PM
> > To: Ananyev, Konstantin
> > Cc: Zhang, Helin; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for
> > double vlan
> >
> > On Tue, 26 May 2015 15:02:51 +0000
> > "Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote:
> >
> > > Hi Stephen,
> > >
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Stephen
> > > > Hemminger
> > > > Sent: Tuesday, May 26, 2015 3:55 PM
> > > > To: Zhang, Helin
> > > > Cc: dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits
> > > > for double vlan
> > > >
> > > > On Tue, 26 May 2015 16:36:37 +0800 Helin Zhang
> > > > <helin.zhang@intel.com> wrote:
> > > >
> > > > > Use the reserved 16 bits in rte_mbuf structure for the outer
> > > > > vlan, also add QinQ offloading flags for both RX and TX sides.
> > > > >
> > > > > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > > >
> > > > Yet another change that is much needed, but breaks ABI
> compatibility.
> > >
> > > Why do you think it breaks ABI compatibility?
> > > As I can see, it uses field that was reserved.
> > > Konstantin
> >
> > Because an application maybe assuming something or reusing the
> reserved fields.
>
> But properly behaving application, shouldn't do that right?
> And for misbehaving ones, why should we care about them?
For any reserved bits, I think all applications should avoid touching them,
as they are reserved for future use or for some other special reason. Otherwise,
unpredictable behavior can be expected.
Regards,
Helin
>
> > Yes, it would be dumb of application to do that but from absolute ABI
> > point of view it is a change.
>
> So, in theory, even adding a new field to the end of rte_mbuf is an ABI
> breakage?
> Konstantin
* Re: [dpdk-dev] [PATCH 1/5] ixgbe: remove a discarded source line
2015-05-26 8:36 ` [dpdk-dev] [PATCH 1/5] ixgbe: remove a discarded source line Helin Zhang
@ 2015-06-01 8:50 ` Olivier MATZ
2015-06-02 1:45 ` Zhang, Helin
0 siblings, 1 reply; 55+ messages in thread
From: Olivier MATZ @ 2015-06-01 8:50 UTC (permalink / raw)
To: Helin Zhang, dev
Hi Helin,
On 05/26/2015 10:36 AM, Helin Zhang wrote:
> Little endian to CPU order conversion had been added for reading
> vlan tag from RX descriptor, while its original source line was
> forgotten to delete. That's a discarded source line and should be
> deleted.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> ---
> drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index 4f9ab22..041c544 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -981,7 +981,6 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
> pkt_len = (uint16_t)(rxdp[j].wb.upper.length - rxq->crc_len);
> mb->data_len = pkt_len;
> mb->pkt_len = pkt_len;
> - mb->vlan_tci = rxdp[j].wb.upper.vlan;
> mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
>
> /* convert descriptor fields to rte mbuf flags */
>
Maybe the following should be added in the commit log:
Fixes: 23fcffe8ffac ("ixgbe: fix id and hash with flow director")
Acked-by: Olivier Matz <olivier.matz@6wind.com>
* Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan
2015-05-26 8:36 ` [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan Helin Zhang
2015-05-26 14:55 ` Stephen Hemminger
@ 2015-06-01 8:50 ` Olivier MATZ
2015-06-02 2:37 ` Zhang, Helin
1 sibling, 1 reply; 55+ messages in thread
From: Olivier MATZ @ 2015-06-01 8:50 UTC (permalink / raw)
To: Helin Zhang, dev
Hi Helin,
On 05/26/2015 10:36 AM, Helin Zhang wrote:
> Use the reserved 16 bits in rte_mbuf structure for the outer vlan,
> also add QinQ offloading flags for both RX and TX sides.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> ---
> lib/librte_mbuf/rte_mbuf.h | 10 +++++++++-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index ab6de67..4551df9 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -101,11 +101,17 @@ extern "C" {
> #define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
> #define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
> #define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
> +#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double VLAN stripped. */
> /* add new RX flags here */
There's a small indent typo here: (1ULL << 15) is not aligned
with the lines above
>
> /* add new TX flags here */
>
> /**
> + * Second VLAN insertion (QinQ) flag.
> + */
> +#define PKT_TX_QINQ_PKT (1ULL << 49) /**< TX packet with double VLAN inserted. */
> +
> +/**
> * TCP segmentation offload. To enable this offload feature for a
> * packet to be transmitted on hardware supporting TSO:
> * - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
> @@ -279,7 +285,7 @@ struct rte_mbuf {
> uint16_t data_len; /**< Amount of data in segment buffer. */
> uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
> uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
> - uint16_t reserved;
> + uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU order) */
> union {
> uint32_t rss; /**< RSS hash result if RSS enabled */
> struct {
> @@ -777,6 +783,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
> m->pkt_len = 0;
> m->tx_offload = 0;
> m->vlan_tci = 0;
> + m->vlan_tci_outer = 0;
> m->nb_segs = 1;
> m->port = 0xff;
>
> @@ -849,6 +856,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
> mi->data_len = m->data_len;
> mi->port = m->port;
> mi->vlan_tci = m->vlan_tci;
> + mi->vlan_tci_outer = m->vlan_tci_outer;
> mi->tx_offload = m->tx_offload;
> mi->hash = m->hash;
>
>
Maybe some more assignments are missing. For instance in
examples/ipv4_multicast/main.c or in examples/vhost/main.c.
You can grep "->vlan_tci =" to find them all.
Do we need to update rte_vlan_insert() and rte_vlan_strip() to
support QinQ?
Regards,
Olivier
* Re: [dpdk-dev] [PATCH 3/5] i40e: support double vlan stripping and insertion
2015-05-26 8:36 ` [dpdk-dev] [PATCH 3/5] i40e: support double vlan stripping and insertion Helin Zhang
@ 2015-06-01 8:50 ` Olivier MATZ
2015-06-02 2:45 ` Zhang, Helin
0 siblings, 1 reply; 55+ messages in thread
From: Olivier MATZ @ 2015-06-01 8:50 UTC (permalink / raw)
To: Helin Zhang, dev
Hi Helin,
On 05/26/2015 10:36 AM, Helin Zhang wrote:
> It configures specific registers to enable double vlan stripping
> on RX side and insertion on TX side.
> The RX descriptors will be parsed, the vlan tags and flags will be
> saved to corresponding mbuf fields if vlan tag is detected.
> The TX descriptors will be configured according to the
> configurations in mbufs, to trigger the hardware insertion of
> double vlan tags for each packets sent out.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> [...]
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index 16dbe00..b26670e 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -882,23 +882,25 @@ struct rte_eth_conf {
> /**
> * RX offload capabilities of a device.
> */
> -#define DEV_RX_OFFLOAD_VLAN_STRIP 0x00000001
> -#define DEV_RX_OFFLOAD_IPV4_CKSUM 0x00000002
> -#define DEV_RX_OFFLOAD_UDP_CKSUM 0x00000004
> -#define DEV_RX_OFFLOAD_TCP_CKSUM 0x00000008
> -#define DEV_RX_OFFLOAD_TCP_LRO 0x00000010
> +#define DEV_RX_OFFLOAD_VLAN_STRIP 0x00000001
> +#define DEV_RX_OFFLOAD_QINQ_STRIP 0x00000002
> +#define DEV_RX_OFFLOAD_IPV4_CKSUM 0x00000004
> +#define DEV_RX_OFFLOAD_UDP_CKSUM 0x00000008
> +#define DEV_RX_OFFLOAD_TCP_CKSUM 0x00000010
> +#define DEV_RX_OFFLOAD_TCP_LRO 0x00000020
>
> /**
> * TX offload capabilities of a device.
> */
> -#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
> -#define DEV_TX_OFFLOAD_IPV4_CKSUM 0x00000002
> -#define DEV_TX_OFFLOAD_UDP_CKSUM 0x00000004
> -#define DEV_TX_OFFLOAD_TCP_CKSUM 0x00000008
> -#define DEV_TX_OFFLOAD_SCTP_CKSUM 0x00000010
> -#define DEV_TX_OFFLOAD_TCP_TSO 0x00000020
> -#define DEV_TX_OFFLOAD_UDP_TSO 0x00000040
> -#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
> +#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
> +#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000002
> +#define DEV_TX_OFFLOAD_IPV4_CKSUM 0x00000004
> +#define DEV_TX_OFFLOAD_UDP_CKSUM 0x00000008
> +#define DEV_TX_OFFLOAD_TCP_CKSUM 0x00000010
> +#define DEV_TX_OFFLOAD_SCTP_CKSUM 0x00000020
> +#define DEV_TX_OFFLOAD_TCP_TSO 0x00000040
> +#define DEV_TX_OFFLOAD_UDP_TSO 0x00000080
> +#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000100
>
> struct rte_eth_dev_info {
> struct rte_pci_device *pci_dev; /**< Device PCI information. */
>
It's probably better to add the new flags after the others
for ABI compat reasons.
Regards,
Olivier
* Re: [dpdk-dev] [PATCH 1/5] ixgbe: remove a discarded source line
2015-06-01 8:50 ` Olivier MATZ
@ 2015-06-02 1:45 ` Zhang, Helin
0 siblings, 0 replies; 55+ messages in thread
From: Zhang, Helin @ 2015-06-02 1:45 UTC (permalink / raw)
To: Olivier MATZ; +Cc: dev
> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Monday, June 1, 2015 4:50 PM
> To: Zhang, Helin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/5] ixgbe: remove a discarded source line
>
> Hi Helin,
>
> On 05/26/2015 10:36 AM, Helin Zhang wrote:
> > Little endian to CPU order conversion had been added for reading vlan
> > tag from RX descriptor, while its original source line was forgotten
> > to delete. That's a discarded source line and should be deleted.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > ---
> > drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
> > 1 file changed, 1 deletion(-)
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c
> > b/drivers/net/ixgbe/ixgbe_rxtx.c index 4f9ab22..041c544 100644
> > --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> > +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> > @@ -981,7 +981,6 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
> > pkt_len = (uint16_t)(rxdp[j].wb.upper.length - rxq->crc_len);
> > mb->data_len = pkt_len;
> > mb->pkt_len = pkt_len;
> > - mb->vlan_tci = rxdp[j].wb.upper.vlan;
> > mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
> >
> > /* convert descriptor fields to rte mbuf flags */
> >
>
> Maybe the following should be added in the commit log:
> Fixes: 23fcffe8ffac ("ixgbe: fix id and hash with flow director")
Agree, will add it. Thanks!
- Helin
>
> Acked-by: Olivier Matz <olivier.matz@6wind.com>
* Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan
2015-06-01 8:50 ` Olivier MATZ
@ 2015-06-02 2:37 ` Zhang, Helin
0 siblings, 0 replies; 55+ messages in thread
From: Zhang, Helin @ 2015-06-02 2:37 UTC (permalink / raw)
To: Olivier MATZ, dev
> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Monday, June 1, 2015 4:50 PM
> To: Zhang, Helin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double
> vlan
>
> Hi Helin,
>
> On 05/26/2015 10:36 AM, Helin Zhang wrote:
> > Use the reserved 16 bits in rte_mbuf structure for the outer vlan,
> > also add QinQ offloading flags for both RX and TX sides.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > ---
> > lib/librte_mbuf/rte_mbuf.h | 10 +++++++++-
> > 1 file changed, 9 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index ab6de67..4551df9 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -101,11 +101,17 @@ extern "C" {
> > #define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with
> IPv6 header. */
> > #define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR
> match. */
> > #define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if
> FDIR match. */
> > +#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double
> VLAN stripped. */
> > /* add new RX flags here */
>
> There's a small indent typo here: (1ULL << 15) is not aligned with the lines above
Will fix it.
>
>
> >
> > /* add new TX flags here */
> >
> > /**
> > + * Second VLAN insertion (QinQ) flag.
> > + */
> > +#define PKT_TX_QINQ_PKT (1ULL << 49) /**< TX packet with double
> VLAN inserted. */
> > +
> > +/**
> > * TCP segmentation offload. To enable this offload feature for a
> > * packet to be transmitted on hardware supporting TSO:
> > * - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag
> > implies @@ -279,7 +285,7 @@ struct rte_mbuf {
> > uint16_t data_len; /**< Amount of data in segment buffer. */
> > uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
> > uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
> > - uint16_t reserved;
> > + uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier
> > +(CPU order) */
> > union {
> > uint32_t rss; /**< RSS hash result if RSS enabled */
> > struct {
> > @@ -777,6 +783,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf
> *m)
> > m->pkt_len = 0;
> > m->tx_offload = 0;
> > m->vlan_tci = 0;
> > + m->vlan_tci_outer = 0;
> > m->nb_segs = 1;
> > m->port = 0xff;
> >
> > @@ -849,6 +856,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf
> *mi, struct rte_mbuf *m)
> > mi->data_len = m->data_len;
> > mi->port = m->port;
> > mi->vlan_tci = m->vlan_tci;
> > + mi->vlan_tci_outer = m->vlan_tci_outer;
> > mi->tx_offload = m->tx_offload;
> > mi->hash = m->hash;
> >
> >
>
> Maybe some more affectations are missing. For instance in
> examples/ipv4_multicast/main.c or in examples/vhost/main.c.
> You can grep "->vlan_tci =" to find them all.
Will add vlan_tci_outer in ipv4_multicast/main.c.
After talking with the vhost developers, vhost does not need to support double vlan at
this moment, so I will keep it as is.
>
> Do we need to update rte_vlan_insert() and rte_vlan_strip() to support QinQ?
They are the software versions of vlan stripping and insertion, mainly for virtio.
I'd like to keep them as is, and let whoever needs QinQ develop the double vlan
stripping/insertion versions in the future.
Thank you very much!
- Helin
>
> Regards,
> Olivier
* Re: [dpdk-dev] [PATCH 3/5] i40e: support double vlan stripping and insertion
2015-06-01 8:50 ` Olivier MATZ
@ 2015-06-02 2:45 ` Zhang, Helin
0 siblings, 0 replies; 55+ messages in thread
From: Zhang, Helin @ 2015-06-02 2:45 UTC (permalink / raw)
To: Olivier MATZ; +Cc: dev
> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Monday, June 1, 2015 4:51 PM
> To: Zhang, Helin; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 3/5] i40e: support double vlan stripping and
> insertion
>
> Hi Helin,
>
> On 05/26/2015 10:36 AM, Helin Zhang wrote:
> > It configures specific registers to enable double vlan stripping on RX
> > side and insertion on TX side.
> > The RX descriptors will be parsed, the vlan tags and flags will be
> > saved to corresponding mbuf fields if vlan tag is detected.
> > The TX descriptors will be configured according to the configurations
> > in mbufs, to trigger the hardware insertion of double vlan tags for
> > each packets sent out.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
>
> > [...]
>
> > diff --git a/lib/librte_ether/rte_ethdev.h
> > b/lib/librte_ether/rte_ethdev.h index 16dbe00..b26670e 100644
> > --- a/lib/librte_ether/rte_ethdev.h
> > +++ b/lib/librte_ether/rte_ethdev.h
> > @@ -882,23 +882,25 @@ struct rte_eth_conf {
> > /**
> > * RX offload capabilities of a device.
> > */
> > -#define DEV_RX_OFFLOAD_VLAN_STRIP 0x00000001 -#define
> > DEV_RX_OFFLOAD_IPV4_CKSUM 0x00000002
> > -#define DEV_RX_OFFLOAD_UDP_CKSUM 0x00000004
> > -#define DEV_RX_OFFLOAD_TCP_CKSUM 0x00000008
> > -#define DEV_RX_OFFLOAD_TCP_LRO 0x00000010
> > +#define DEV_RX_OFFLOAD_VLAN_STRIP 0x00000001
> > +#define DEV_RX_OFFLOAD_QINQ_STRIP 0x00000002
> > +#define DEV_RX_OFFLOAD_IPV4_CKSUM 0x00000004
> > +#define DEV_RX_OFFLOAD_UDP_CKSUM 0x00000008
> > +#define DEV_RX_OFFLOAD_TCP_CKSUM 0x00000010
> > +#define DEV_RX_OFFLOAD_TCP_LRO 0x00000020
> >
> > /**
> > * TX offload capabilities of a device.
> > */
> > -#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001 -#define
> > DEV_TX_OFFLOAD_IPV4_CKSUM 0x00000002
> > -#define DEV_TX_OFFLOAD_UDP_CKSUM 0x00000004
> > -#define DEV_TX_OFFLOAD_TCP_CKSUM 0x00000008
> > -#define DEV_TX_OFFLOAD_SCTP_CKSUM 0x00000010
> > -#define DEV_TX_OFFLOAD_TCP_TSO 0x00000020
> > -#define DEV_TX_OFFLOAD_UDP_TSO 0x00000040
> > -#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for
> > tunneling packet. */
> > +#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
> > +#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000002
> > +#define DEV_TX_OFFLOAD_IPV4_CKSUM 0x00000004
> > +#define DEV_TX_OFFLOAD_UDP_CKSUM 0x00000008
> > +#define DEV_TX_OFFLOAD_TCP_CKSUM 0x00000010
> > +#define DEV_TX_OFFLOAD_SCTP_CKSUM 0x00000020
> > +#define DEV_TX_OFFLOAD_TCP_TSO 0x00000040
> > +#define DEV_TX_OFFLOAD_UDP_TSO 0x00000080
> > +#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000100
> >
> > struct rte_eth_dev_info {
> > struct rte_pci_device *pci_dev; /**< Device PCI information. */
> >
>
> It's probably better to add the new flags after the others for ABI compat reasons.
Agree, will fix it.
Thanks,
Helin
>
>
> Regards,
> Olivier
* [dpdk-dev] [PATCH v2 0/6] support i40e QinQ stripping and insertion
2015-05-26 8:36 ` [dpdk-dev] [PATCH 0/5] support i40e " Helin Zhang
` (4 preceding siblings ...)
2015-05-26 8:36 ` [dpdk-dev] [PATCH 5/5] app/testpmd: add test cases for qinq stripping and insertion Helin Zhang
@ 2015-06-02 3:16 ` Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 1/6] ixgbe: remove a discarded source line Helin Zhang
` (8 more replies)
5 siblings, 9 replies; 55+ messages in thread
From: Helin Zhang @ 2015-06-02 3:16 UTC (permalink / raw)
To: dev
As the i40e hardware can be reconfigured to support QinQ stripping and
insertion, this patch set enables that, using the reserved 16 bits in
'struct rte_mbuf' for the second vlan tag.
A corresponding command is added in testpmd for testing.
Note that there is no need to rework the vector PMD, as nothing it uses has changed.
v2 changes:
* Added a 'Fixes' line to the commit log, indicating which commit is fixed.
* Fixed a typo.
* Kept the original RX/TX offload flags as they were, and appended the new
flags with new bit masks, for ABI compatibility.
* Supported double vlan stripping/insertion in examples/ipv4_multicast.
Helin Zhang (6):
ixgbe: remove a discarded source line
mbuf: use the reserved 16 bits for double vlan
i40e: support double vlan stripping and insertion
i40evf: add supported offload capability flags
app/testpmd: add test cases for qinq stripping and insertion
examples/ipv4_multicast: support double vlan stripping and insertion
app/test-pmd/cmdline.c | 78 +++++++++++++++++++++++++++++++++----
app/test-pmd/config.c | 21 +++++++++-
app/test-pmd/flowgen.c | 4 +-
app/test-pmd/macfwd.c | 3 ++
app/test-pmd/macswap.c | 3 ++
app/test-pmd/rxonly.c | 3 ++
app/test-pmd/testpmd.h | 6 ++-
app/test-pmd/txonly.c | 8 +++-
drivers/net/i40e/i40e_ethdev.c | 52 +++++++++++++++++++++++++
drivers/net/i40e/i40e_ethdev_vf.c | 13 +++++++
drivers/net/i40e/i40e_rxtx.c | 81 +++++++++++++++++++++++++--------------
drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
examples/ipv4_multicast/main.c | 1 +
lib/librte_ether/rte_ethdev.h | 2 +
lib/librte_mbuf/rte_mbuf.h | 10 ++++-
15 files changed, 243 insertions(+), 43 deletions(-)
--
1.9.3
* [dpdk-dev] [PATCH v2 1/6] ixgbe: remove a discarded source line
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Helin Zhang
@ 2015-06-02 3:16 ` Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 2/6] mbuf: use the reserved 16 bits for double vlan Helin Zhang
` (7 subsequent siblings)
8 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-06-02 3:16 UTC (permalink / raw)
To: dev
A little-endian to CPU order conversion was added for reading the
vlan tag from the RX descriptor, but the original source line was
never deleted. That obsolete line should be removed.
Fixes: 23fcffe8ffac ("ixgbe: fix id and hash with flow director")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
1 file changed, 1 deletion(-)
v2 changes:
* Added a 'Fixes' line to the commit log, indicating which commit is fixed.
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 4f9ab22..041c544 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -981,7 +981,6 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
pkt_len = (uint16_t)(rxdp[j].wb.upper.length - rxq->crc_len);
mb->data_len = pkt_len;
mb->pkt_len = pkt_len;
- mb->vlan_tci = rxdp[j].wb.upper.vlan;
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
--
1.9.3
* [dpdk-dev] [PATCH v2 2/6] mbuf: use the reserved 16 bits for double vlan
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 1/6] ixgbe: remove a discarded source line Helin Zhang
@ 2015-06-02 3:16 ` Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 3/6] i40e: support double vlan stripping and insertion Helin Zhang
` (6 subsequent siblings)
8 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-06-02 3:16 UTC (permalink / raw)
To: dev
Use the reserved 16 bits in rte_mbuf structure for the outer vlan,
also add QinQ offloading flags for both RX and TX sides.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
v2 changes:
* Fixed a typo.
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ab6de67..84fe181 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -101,11 +101,17 @@ extern "C" {
#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
+#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double VLAN stripped. */
/* add new RX flags here */
/* add new TX flags here */
/**
+ * Second VLAN insertion (QinQ) flag.
+ */
+#define PKT_TX_QINQ_PKT (1ULL << 49) /**< TX packet with double VLAN inserted. */
+
+/**
* TCP segmentation offload. To enable this offload feature for a
* packet to be transmitted on hardware supporting TSO:
* - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
@@ -279,7 +285,7 @@ struct rte_mbuf {
uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
- uint16_t reserved;
+ uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU order) */
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
@@ -777,6 +783,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
m->pkt_len = 0;
m->tx_offload = 0;
m->vlan_tci = 0;
+ m->vlan_tci_outer = 0;
m->nb_segs = 1;
m->port = 0xff;
@@ -849,6 +856,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
mi->data_len = m->data_len;
mi->port = m->port;
mi->vlan_tci = m->vlan_tci;
+ mi->vlan_tci_outer = m->vlan_tci_outer;
mi->tx_offload = m->tx_offload;
mi->hash = m->hash;
--
1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* [dpdk-dev] [PATCH v2 3/6] i40e: support double vlan stripping and insertion
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 1/6] ixgbe: remove a discarded source line Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 2/6] mbuf: use the reserved 16 bits for double vlan Helin Zhang
@ 2015-06-02 3:16 ` Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 4/6] i40evf: add supported offload capability flags Helin Zhang
` (5 subsequent siblings)
8 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-06-02 3:16 UTC (permalink / raw)
To: dev
It configures specific registers to enable double VLAN stripping
on the RX side and insertion on the TX side.
The RX descriptors are parsed, and the VLAN tags and flags are
saved to the corresponding mbuf fields if a VLAN tag is detected.
The TX descriptors are configured according to the settings in the
mbufs, to trigger hardware insertion of double VLAN tags for each
packet sent out.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 52 +++++++++++++++++++++++++
drivers/net/i40e/i40e_ethdev_vf.c | 6 +++
drivers/net/i40e/i40e_rxtx.c | 81 +++++++++++++++++++++++++--------------
lib/librte_ether/rte_ethdev.h | 2 +
4 files changed, 112 insertions(+), 29 deletions(-)
v2 changes:
* Kept the original RX/TX offload flags as they were, added new
flags after with new bit masks, for ABI compatibility.
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index da6c0b5..7593a70 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -211,6 +211,7 @@ static int i40e_dev_filter_ctrl(struct rte_eth_dev *dev,
void *arg);
static void i40e_configure_registers(struct i40e_hw *hw);
static void i40e_hw_init(struct i40e_hw *hw);
+static int i40e_config_qinq(struct i40e_hw *hw, struct i40e_vsi *vsi);
static const struct rte_pci_id pci_id_i40e_map[] = {
#define RTE_PCI_DEV_ID_DECL_I40E(vend, dev) {RTE_PCI_DEVICE(vend, dev)},
@@ -1529,11 +1530,13 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_vfs = dev->pci_dev->max_vfs;
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_QINQ_STRIP |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM;
dev_info->tx_offload_capa =
DEV_TX_OFFLOAD_VLAN_INSERT |
+ DEV_TX_OFFLOAD_QINQ_INSERT |
DEV_TX_OFFLOAD_IPV4_CKSUM |
DEV_TX_OFFLOAD_UDP_CKSUM |
DEV_TX_OFFLOAD_TCP_CKSUM |
@@ -3056,6 +3059,7 @@ i40e_vsi_setup(struct i40e_pf *pf,
* macvlan filter which is expected and cannot be removed.
*/
i40e_update_default_filter_setting(vsi);
+ i40e_config_qinq(hw, vsi);
} else if (type == I40E_VSI_SRIOV) {
memset(&ctxt, 0, sizeof(ctxt));
/**
@@ -3096,6 +3100,8 @@ i40e_vsi_setup(struct i40e_pf *pf,
* Since VSI is not created yet, only configure parameter,
* will add vsi below.
*/
+
+ i40e_config_qinq(hw, vsi);
} else if (type == I40E_VSI_VMDQ2) {
memset(&ctxt, 0, sizeof(ctxt));
/*
@@ -5697,3 +5703,49 @@ i40e_configure_registers(struct i40e_hw *hw)
"0x%"PRIx32, reg_table[i].val, reg_table[i].addr);
}
}
+
+#define I40E_VSI_TSR(_i) (0x00050800 + ((_i) * 4))
+#define I40E_VSI_TSR_QINQ_CONFIG 0xc030
+#define I40E_VSI_L2TAGSTXVALID(_i) (0x00042800 + ((_i) * 4))
+#define I40E_VSI_L2TAGSTXVALID_QINQ 0xab
+static int
+i40e_config_qinq(struct i40e_hw *hw, struct i40e_vsi *vsi)
+{
+ uint32_t reg;
+ int ret;
+
+ if (vsi->vsi_id >= I40E_MAX_NUM_VSIS) {
+ PMD_DRV_LOG(ERR, "VSI ID exceeds the maximum");
+ return -EINVAL;
+ }
+
+ /* Configure for double VLAN RX stripping */
+ reg = I40E_READ_REG(hw, I40E_VSI_TSR(vsi->vsi_id));
+ if ((reg & I40E_VSI_TSR_QINQ_CONFIG) != I40E_VSI_TSR_QINQ_CONFIG) {
+ reg |= I40E_VSI_TSR_QINQ_CONFIG;
+ ret = i40e_aq_debug_write_register(hw,
+ I40E_VSI_TSR(vsi->vsi_id),
+ reg, NULL);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to update VSI_TSR[%d]",
+ vsi->vsi_id);
+ return I40E_ERR_CONFIG;
+ }
+ }
+
+ /* Configure for double VLAN TX insertion */
+ reg = I40E_READ_REG(hw, I40E_VSI_L2TAGSTXVALID(vsi->vsi_id));
+ if ((reg & 0xff) != I40E_VSI_L2TAGSTXVALID_QINQ) {
+ reg = I40E_VSI_L2TAGSTXVALID_QINQ;
+ ret = i40e_aq_debug_write_register(hw,
+ I40E_VSI_L2TAGSTXVALID(
+ vsi->vsi_id), reg, NULL);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to update "
+ "VSI_L2TAGSTXVALID[%d]", vsi->vsi_id);
+ return I40E_ERR_CONFIG;
+ }
+ }
+
+ return 0;
+}
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 9f92a2f..1a4d088 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1643,6 +1643,12 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
+ dev_info->rx_offload_capa =
+ DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_QINQ_STRIP;
+ dev_info->tx_offload_capa =
+ DEV_TX_OFFLOAD_VLAN_INSERT |
+ DEV_TX_OFFLOAD_QINQ_INSERT;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 787f0bd..442494e 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -95,18 +95,44 @@ static uint16_t i40e_xmit_pkts_simple(void *tx_queue,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
+static inline void
+i40e_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union i40e_rx_desc *rxdp)
+{
+ if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+ (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
+ mb->ol_flags |= PKT_RX_VLAN_PKT;
+ mb->vlan_tci =
+ rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
+ rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1));
+ } else {
+ mb->vlan_tci = 0;
+ }
+#ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
+ (1 << I40E_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT)) {
+ mb->ol_flags |= PKT_RX_QINQ_PKT;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_1),
+ rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
+ PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
+ mb->vlan_tci, mb->vlan_tci_outer);
+}
+
/* Translate the rx descriptor status to pkt flags */
static inline uint64_t
i40e_rxd_status_to_pkt_flags(uint64_t qword)
{
uint64_t flags;
- /* Check if VLAN packet */
- flags = qword & (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
- PKT_RX_VLAN_PKT : 0;
-
/* Check if RSS_HASH */
- flags |= (((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) &
+ flags = (((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) &
I40E_RX_DESC_FLTSTAT_RSS_HASH) ==
I40E_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
@@ -697,16 +723,12 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
mb = rxep[j].mbuf;
qword1 = rte_le_to_cpu_64(\
rxdp[j].wb.qword1.status_error_len);
- rx_status = (qword1 & I40E_RXD_QW1_STATUS_MASK) >>
- I40E_RXD_QW1_STATUS_SHIFT;
pkt_len = ((qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
mb->data_len = pkt_len;
mb->pkt_len = pkt_len;
- mb->vlan_tci = rx_status &
- (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
- rte_le_to_cpu_16(\
- rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
+ mb->ol_flags = 0;
+ i40e_rxd_to_vlan_tci(mb, &rxdp[j]);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
@@ -720,7 +742,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
if (pkt_flags & PKT_RX_FDIR)
pkt_flags |= i40e_rxd_build_fdir(&rxdp[j], mb);
- mb->ol_flags = pkt_flags;
+ mb->ol_flags |= pkt_flags;
}
for (j = 0; j < I40E_LOOK_AHEAD; j++)
@@ -946,10 +968,8 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rxm->pkt_len = rx_packet_len;
rxm->data_len = rx_packet_len;
rxm->port = rxq->port_id;
-
- rxm->vlan_tci = rx_status &
- (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
- rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
+ rxm->ol_flags = 0;
+ i40e_rxd_to_vlan_tci(rxm, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
@@ -961,7 +981,7 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (pkt_flags & PKT_RX_FDIR)
pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
- rxm->ol_flags = pkt_flags;
+ rxm->ol_flags |= pkt_flags;
rx_pkts[nb_rx++] = rxm;
}
@@ -1106,9 +1126,8 @@ i40e_recv_scattered_pkts(void *rx_queue,
}
first_seg->port = rxq->port_id;
- first_seg->vlan_tci = (rx_status &
- (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
- rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
+ first_seg->ol_flags = 0;
+ i40e_rxd_to_vlan_tci(first_seg, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
@@ -1121,7 +1140,7 @@ i40e_recv_scattered_pkts(void *rx_queue,
if (pkt_flags & PKT_RX_FDIR)
pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
- first_seg->ol_flags = pkt_flags;
+ first_seg->ol_flags |= pkt_flags;
/* Prefetch data of first segment, if configured to do so. */
rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
@@ -1159,17 +1178,15 @@ i40e_recv_scattered_pkts(void *rx_queue,
static inline uint16_t
i40e_calc_context_desc(uint64_t flags)
{
- uint64_t mask = 0ULL;
-
- mask |= (PKT_TX_OUTER_IP_CKSUM | PKT_TX_TCP_SEG);
+ static uint64_t mask = PKT_TX_OUTER_IP_CKSUM |
+ PKT_TX_TCP_SEG |
+ PKT_TX_QINQ_PKT;
#ifdef RTE_LIBRTE_IEEE1588
mask |= PKT_TX_IEEE1588_TMST;
#endif
- if (flags & mask)
- return 1;
- return 0;
+ return ((flags & mask) ? 1 : 0);
}
/* set i40e TSO context descriptor */
@@ -1290,9 +1307,9 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
}
/* Descriptor based VLAN insertion */
- if (ol_flags & PKT_TX_VLAN_PKT) {
+ if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
tx_flags |= tx_pkt->vlan_tci <<
- I40E_TX_FLAG_L2TAG1_SHIFT;
+ I40E_TX_FLAG_L2TAG1_SHIFT;
tx_flags |= I40E_TX_FLAG_INSERT_VLAN;
td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
td_tag = (tx_flags & I40E_TX_FLAG_L2TAG1_MASK) >>
@@ -1340,6 +1357,12 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ctx_txd->tunneling_params =
rte_cpu_to_le_32(cd_tunneling_params);
+ if (ol_flags & PKT_TX_QINQ_PKT) {
+ cd_l2tag2 = tx_pkt->vlan_tci_outer;
+ cd_type_cmd_tso_mss |=
+ ((uint64_t)I40E_TX_CTX_DESC_IL2TAG2 <<
+ I40E_TXD_CTX_QW1_CMD_SHIFT);
+ }
ctx_txd->l2tag2 = rte_cpu_to_le_16(cd_l2tag2);
ctx_txd->type_cmd_tso_mss =
rte_cpu_to_le_64(cd_type_cmd_tso_mss);
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 16dbe00..892280c 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -887,6 +887,7 @@ struct rte_eth_conf {
#define DEV_RX_OFFLOAD_UDP_CKSUM 0x00000004
#define DEV_RX_OFFLOAD_TCP_CKSUM 0x00000008
#define DEV_RX_OFFLOAD_TCP_LRO 0x00000010
+#define DEV_RX_OFFLOAD_QINQ_STRIP 0x00000020
/**
* TX offload capabilities of a device.
@@ -899,6 +900,7 @@ struct rte_eth_conf {
#define DEV_TX_OFFLOAD_TCP_TSO 0x00000020
#define DEV_TX_OFFLOAD_UDP_TSO 0x00000040
#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000100
struct rte_eth_dev_info {
struct rte_pci_device *pci_dev; /**< Device PCI information. */
--
1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* [dpdk-dev] [PATCH v2 4/6] i40evf: add supported offload capability flags
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Helin Zhang
` (2 preceding siblings ...)
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 3/6] i40e: support double vlan stripping and insertion Helin Zhang
@ 2015-06-02 3:16 ` Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 5/6] app/testpmd: add test cases for qinq stripping and insertion Helin Zhang
` (4 subsequent siblings)
8 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-06-02 3:16 UTC (permalink / raw)
To: dev
Add the checksum offload capability flags which have already been
supported for a long time.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/i40e/i40e_ethdev_vf.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 1a4d088..12d7917 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1645,10 +1645,17 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP;
+ DEV_RX_OFFLOAD_QINQ_STRIP |
+ DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM;
dev_info->tx_offload_capa =
DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT;
+ DEV_TX_OFFLOAD_QINQ_INSERT |
+ DEV_TX_OFFLOAD_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_UDP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_CKSUM |
+ DEV_TX_OFFLOAD_SCTP_CKSUM;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
--
1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* [dpdk-dev] [PATCH v2 5/6] app/testpmd: add test cases for qinq stripping and insertion
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Helin Zhang
` (3 preceding siblings ...)
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 4/6] i40evf: add supported offload capability flags Helin Zhang
@ 2015-06-02 3:16 ` Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 6/6] examples/ipv4_multicast: support double vlan " Helin Zhang
` (3 subsequent siblings)
8 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-06-02 3:16 UTC (permalink / raw)
To: dev
If a double VLAN is detected, its stripped flag and VLAN tags can be
printed in rxonly mode. The 'tx_vlan set' test command is extended
to set both single and double VLAN tags on the TX side for each
packet to be sent out.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pmd/cmdline.c | 78 +++++++++++++++++++++++++++++++++++++++++++++-----
app/test-pmd/config.c | 21 +++++++++++++-
app/test-pmd/flowgen.c | 4 ++-
app/test-pmd/macfwd.c | 3 ++
app/test-pmd/macswap.c | 3 ++
app/test-pmd/rxonly.c | 3 ++
app/test-pmd/testpmd.h | 6 +++-
app/test-pmd/txonly.c | 8 ++++--
8 files changed, 114 insertions(+), 12 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index f01db2a..db2e73e 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -304,9 +304,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"rx_vxlan_port rm (udp_port) (port_id)\n"
" Remove an UDP port for VXLAN packet filter on a port\n\n"
- "tx_vlan set vlan_id (port_id)\n"
- " Set hardware insertion of VLAN ID in packets sent"
- " on a port.\n\n"
+ "tx_vlan set (port_id) vlan_id[, vlan_id_outer]\n"
+ " Set hardware insertion of VLAN IDs (single or double VLAN "
+ "depends on the number of VLAN IDs) in packets sent on a port.\n\n"
"tx_vlan set pvid port_id vlan_id (on|off)\n"
" Set port based TX VLAN insertion.\n\n"
@@ -2799,8 +2799,8 @@ cmdline_parse_inst_t cmd_rx_vlan_filter = {
struct cmd_tx_vlan_set_result {
cmdline_fixed_string_t tx_vlan;
cmdline_fixed_string_t set;
- uint16_t vlan_id;
uint8_t port_id;
+ uint16_t vlan_id;
};
static void
@@ -2809,6 +2809,13 @@ cmd_tx_vlan_set_parsed(void *parsed_result,
__attribute__((unused)) void *data)
{
struct cmd_tx_vlan_set_result *res = parsed_result;
+ int vlan_offload = rte_eth_dev_get_vlan_offload(res->port_id);
+
+ if (vlan_offload & ETH_VLAN_EXTEND_OFFLOAD) {
+ printf("Error, as QinQ has been enabled.\n");
+ return;
+ }
+
tx_vlan_set(res->port_id, res->vlan_id);
}
@@ -2828,13 +2835,69 @@ cmdline_parse_token_num_t cmd_tx_vlan_set_portid =
cmdline_parse_inst_t cmd_tx_vlan_set = {
.f = cmd_tx_vlan_set_parsed,
.data = NULL,
- .help_str = "enable hardware insertion of a VLAN header with a given "
- "TAG Identifier in packets sent on a port",
+ .help_str = "enable hardware insertion of a single VLAN header "
+ "with a given TAG Identifier in packets sent on a port",
.tokens = {
(void *)&cmd_tx_vlan_set_tx_vlan,
(void *)&cmd_tx_vlan_set_set,
- (void *)&cmd_tx_vlan_set_vlanid,
(void *)&cmd_tx_vlan_set_portid,
+ (void *)&cmd_tx_vlan_set_vlanid,
+ NULL,
+ },
+};
+
+/* *** ENABLE HARDWARE INSERTION OF Double VLAN HEADER IN TX PACKETS *** */
+struct cmd_tx_vlan_set_qinq_result {
+ cmdline_fixed_string_t tx_vlan;
+ cmdline_fixed_string_t set;
+ uint8_t port_id;
+ uint16_t vlan_id;
+ uint16_t vlan_id_outer;
+};
+
+static void
+cmd_tx_vlan_set_qinq_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_tx_vlan_set_qinq_result *res = parsed_result;
+ int vlan_offload = rte_eth_dev_get_vlan_offload(res->port_id);
+
+ if (!(vlan_offload & ETH_VLAN_EXTEND_OFFLOAD)) {
+ printf("Error, as QinQ hasn't been enabled.\n");
+ return;
+ }
+
+ tx_qinq_set(res->port_id, res->vlan_id, res->vlan_id_outer);
+}
+
+cmdline_parse_token_string_t cmd_tx_vlan_set_qinq_tx_vlan =
+ TOKEN_STRING_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ tx_vlan, "tx_vlan");
+cmdline_parse_token_string_t cmd_tx_vlan_set_qinq_set =
+ TOKEN_STRING_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ set, "set");
+cmdline_parse_token_num_t cmd_tx_vlan_set_qinq_portid =
+ TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ port_id, UINT8);
+cmdline_parse_token_num_t cmd_tx_vlan_set_qinq_vlanid =
+ TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ vlan_id, UINT16);
+cmdline_parse_token_num_t cmd_tx_vlan_set_qinq_vlanid_outer =
+ TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ vlan_id_outer, UINT16);
+
+cmdline_parse_inst_t cmd_tx_vlan_set_qinq = {
+ .f = cmd_tx_vlan_set_qinq_parsed,
+ .data = NULL,
+ .help_str = "enable hardware insertion of double VLAN header "
+ "with given TAG Identifiers in packets sent on a port",
+ .tokens = {
+ (void *)&cmd_tx_vlan_set_qinq_tx_vlan,
+ (void *)&cmd_tx_vlan_set_qinq_set,
+ (void *)&cmd_tx_vlan_set_qinq_portid,
+ (void *)&cmd_tx_vlan_set_qinq_vlanid,
+ (void *)&cmd_tx_vlan_set_qinq_vlanid_outer,
NULL,
},
};
@@ -8782,6 +8845,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_rx_vlan_filter_all,
(cmdline_parse_inst_t *)&cmd_rx_vlan_filter,
(cmdline_parse_inst_t *)&cmd_tx_vlan_set,
+ (cmdline_parse_inst_t *)&cmd_tx_vlan_set_qinq,
(cmdline_parse_inst_t *)&cmd_tx_vlan_reset,
(cmdline_parse_inst_t *)&cmd_tx_vlan_set_pvid,
(cmdline_parse_inst_t *)&cmd_csum_set,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index f788ed5..8c49e4d 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1732,16 +1732,35 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
return;
if (vlan_id_is_invalid(vlan_id))
return;
+ tx_vlan_reset(port_id);
ports[port_id].tx_ol_flags |= TESTPMD_TX_OFFLOAD_INSERT_VLAN;
ports[port_id].tx_vlan_id = vlan_id;
}
void
+tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
+{
+ if (port_id_is_invalid(port_id, ENABLED_WARN))
+ return;
+ if (vlan_id_is_invalid(vlan_id))
+ return;
+ if (vlan_id_is_invalid(vlan_id_outer))
+ return;
+ tx_vlan_reset(port_id);
+ ports[port_id].tx_ol_flags |= TESTPMD_TX_OFFLOAD_INSERT_QINQ;
+ ports[port_id].tx_vlan_id = vlan_id;
+ ports[port_id].tx_vlan_id_outer = vlan_id_outer;
+}
+
+void
tx_vlan_reset(portid_t port_id)
{
if (port_id_is_invalid(port_id, ENABLED_WARN))
return;
- ports[port_id].tx_ol_flags &= ~TESTPMD_TX_OFFLOAD_INSERT_VLAN;
+ ports[port_id].tx_ol_flags &= ~(TESTPMD_TX_OFFLOAD_INSERT_VLAN |
+ TESTPMD_TX_OFFLOAD_INSERT_QINQ);
+ ports[port_id].tx_vlan_id = 0;
+ ports[port_id].tx_vlan_id_outer = 0;
}
void
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 72016c9..fce96dc 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -136,7 +136,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
struct ether_hdr *eth_hdr;
struct ipv4_hdr *ip_hdr;
struct udp_hdr *udp_hdr;
- uint16_t vlan_tci;
+ uint16_t vlan_tci, vlan_tci_outer;
uint16_t ol_flags;
uint16_t nb_rx;
uint16_t nb_tx;
@@ -163,6 +163,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
mbp = current_fwd_lcore()->mbp;
vlan_tci = ports[fs->tx_port].tx_vlan_id;
+ vlan_tci_outer = ports[fs->tx_port].tx_vlan_id_outer;
ol_flags = ports[fs->tx_port].tx_ol_flags;
for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
@@ -208,6 +209,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
pkt->pkt_len = pkt_size;
pkt->ol_flags = ol_flags;
pkt->vlan_tci = vlan_tci;
+ pkt->vlan_tci_outer = vlan_tci_outer;
pkt->l2_len = sizeof(struct ether_hdr);
pkt->l3_len = sizeof(struct ipv4_hdr);
pkts_burst[nb_pkt] = pkt;
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 035e5eb..3b7fffb 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -110,6 +110,8 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
txp = &ports[fs->tx_port];
if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
ol_flags = PKT_TX_VLAN_PKT;
+ if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_QINQ)
+ ol_flags |= PKT_TX_QINQ_PKT;
for (i = 0; i < nb_rx; i++) {
mb = pkts_burst[i];
eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
@@ -121,6 +123,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
mb->l2_len = sizeof(struct ether_hdr);
mb->l3_len = sizeof(struct ipv4_hdr);
mb->vlan_tci = txp->tx_vlan_id;
+ mb->vlan_tci_outer = txp->tx_vlan_id_outer;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c
index 6729849..154889d 100644
--- a/app/test-pmd/macswap.c
+++ b/app/test-pmd/macswap.c
@@ -110,6 +110,8 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
txp = &ports[fs->tx_port];
if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
ol_flags = PKT_TX_VLAN_PKT;
+ if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_QINQ)
+ ol_flags |= PKT_TX_QINQ_PKT;
for (i = 0; i < nb_rx; i++) {
mb = pkts_burst[i];
eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
@@ -123,6 +125,7 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
mb->l2_len = sizeof(struct ether_hdr);
mb->l3_len = sizeof(struct ipv4_hdr);
mb->vlan_tci = txp->tx_vlan_id;
+ mb->vlan_tci_outer = txp->tx_vlan_id_outer;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index ac56090..f6a2f84 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -160,6 +160,9 @@ pkt_burst_receive(struct fwd_stream *fs)
}
if (ol_flags & PKT_RX_VLAN_PKT)
printf(" - VLAN tci=0x%x", mb->vlan_tci);
+ if (ol_flags & PKT_RX_QINQ_PKT)
+ printf(" - QinQ VLAN tci=0x%x, VLAN tci outer=0x%x",
+ mb->vlan_tci, mb->vlan_tci_outer);
if (is_encapsulation) {
struct ipv4_hdr *ipv4_hdr;
struct ipv6_hdr *ipv6_hdr;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index c3b6700..e71951b 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -133,6 +133,8 @@ struct fwd_stream {
#define TESTPMD_TX_OFFLOAD_PARSE_TUNNEL 0x0020
/** Insert VLAN header in forward engine */
#define TESTPMD_TX_OFFLOAD_INSERT_VLAN 0x0040
+/** Insert double VLAN header in forward engine */
+#define TESTPMD_TX_OFFLOAD_INSERT_QINQ 0x0080
/**
* The data structure associated with each port.
@@ -149,7 +151,8 @@ struct rte_port {
unsigned int socket_id; /**< For NUMA support */
uint16_t tx_ol_flags;/**< TX Offload Flags (TESTPMD_TX_OFFLOAD...). */
uint16_t tso_segsz; /**< MSS for segmentation offload. */
- uint16_t tx_vlan_id; /**< Tag Id. in TX VLAN packets. */
+ uint16_t tx_vlan_id;/**< The tag ID */
+ uint16_t tx_vlan_id_outer;/**< The outer tag ID */
void *fwd_ctx; /**< Forwarding mode context */
uint64_t rx_bad_ip_csum; /**< rx pkts with bad ip checksum */
uint64_t rx_bad_l4_csum; /**< rx pkts with bad l4 checksum */
@@ -513,6 +516,7 @@ int rx_vft_set(portid_t port_id, uint16_t vlan_id, int on);
void vlan_extend_set(portid_t port_id, int on);
void vlan_tpid_set(portid_t port_id, uint16_t tp_id);
void tx_vlan_set(portid_t port_id, uint16_t vlan_id);
+void tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer);
void tx_vlan_reset(portid_t port_id);
void tx_vlan_pvid_set(portid_t port_id, uint16_t vlan_id, int on);
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index ca32c85..8ce6109 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -202,7 +202,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
struct ether_hdr eth_hdr;
uint16_t nb_tx;
uint16_t nb_pkt;
- uint16_t vlan_tci;
+ uint16_t vlan_tci, vlan_tci_outer;
uint64_t ol_flags = 0;
uint8_t i;
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
@@ -218,8 +218,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
mbp = current_fwd_lcore()->mbp;
txp = &ports[fs->tx_port];
vlan_tci = txp->tx_vlan_id;
+ vlan_tci_outer = txp->tx_vlan_id_outer;
if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
ol_flags = PKT_TX_VLAN_PKT;
+ if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_QINQ)
+ ol_flags |= PKT_TX_QINQ_PKT;
for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
pkt = tx_mbuf_alloc(mbp);
if (pkt == NULL) {
@@ -266,7 +269,8 @@ pkt_burst_transmit(struct fwd_stream *fs)
pkt->nb_segs = tx_pkt_nb_segs;
pkt->pkt_len = tx_pkt_length;
pkt->ol_flags = ol_flags;
- pkt->vlan_tci = vlan_tci;
+ pkt->vlan_tci = vlan_tci;
+ pkt->vlan_tci_outer = vlan_tci_outer;
pkt->l2_len = sizeof(struct ether_hdr);
pkt->l3_len = sizeof(struct ipv4_hdr);
pkts_burst[nb_pkt] = pkt;
--
1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* [dpdk-dev] [PATCH v2 6/6] examples/ipv4_multicast: support double vlan stripping and insertion
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Helin Zhang
` (4 preceding siblings ...)
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 5/6] app/testpmd: add test cases for qinq stripping and insertion Helin Zhang
@ 2015-06-02 3:16 ` Helin Zhang
2015-06-02 7:37 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Liu, Jijiang
` (2 subsequent siblings)
8 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-06-02 3:16 UTC (permalink / raw)
To: dev
The outer VLAN tag should be copied from the source packet buffer to
support double VLAN stripping and insertion, as double VLAN tags can
be stripped or inserted by some NIC hardware.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ipv4_multicast/main.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 2a2b915..d4253c0 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -298,6 +298,7 @@ mcast_out_pkt(struct rte_mbuf *pkt, int use_clone)
/* copy metadata from source packet*/
hdr->port = pkt->port;
hdr->vlan_tci = pkt->vlan_tci;
+ hdr->vlan_tci_outer = pkt->vlan_tci_outer;
hdr->tx_offload = pkt->tx_offload;
hdr->hash = pkt->hash;
--
1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* Re: [dpdk-dev] [PATCH v2 0/6] support i40e QinQ stripping and insertion
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Helin Zhang
` (5 preceding siblings ...)
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 6/6] examples/ipv4_multicast: support double vlan " Helin Zhang
@ 2015-06-02 7:37 ` Liu, Jijiang
2015-06-08 7:32 ` Cao, Min
2015-06-08 7:40 ` Olivier MATZ
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 0/7] " Helin Zhang
8 siblings, 1 reply; 55+ messages in thread
From: Liu, Jijiang @ 2015-06-02 7:37 UTC (permalink / raw)
To: Zhang, Helin, dev
Acked-by: Jijiang Liu <Jijiang.liu@intel.com>
> -----Original Message-----
> From: Zhang, Helin
> Sent: Tuesday, June 2, 2015 11:16 AM
> To: dev@dpdk.org
> Cc: Cao, Min; Liu, Jijiang; Wu, Jingjing; Ananyev, Konstantin; Richardson, Bruce;
> olivier.matz@6wind.com; Zhang, Helin
> Subject: [PATCH v2 0/6] support i40e QinQ stripping and insertion
>
> As i40e hardware can be reconfigured to support QinQ stripping and insertion,
> this patch set is to enable that together with using the reserved 16 bits in 'struct
> rte_mbuf' for the second vlan tag.
> Corresponding command is added in testpmd for testing.
> Note that no need to rework vPMD, as nothings used in it changed.
>
> v2 changes:
> * Added more commit logs of which commit it fix for.
> * Fixed a typo.
> * Kept the original RX/TX offload flags as they were, added new
> flags after with new bit masks, for ABI compatibility.
> * Supported double vlan stripping/insertion in examples/ipv4_multicast.
>
> Helin Zhang (6):
> ixgbe: remove a discarded source line
> mbuf: use the reserved 16 bits for double vlan
> i40e: support double vlan stripping and insertion
> i40evf: add supported offload capability flags
> app/testpmd: add test cases for qinq stripping and insertion
> examples/ipv4_multicast: support double vlan stripping and insertion
>
> app/test-pmd/cmdline.c | 78 +++++++++++++++++++++++++++++++++----
> app/test-pmd/config.c | 21 +++++++++-
> app/test-pmd/flowgen.c | 4 +-
> app/test-pmd/macfwd.c | 3 ++
> app/test-pmd/macswap.c | 3 ++
> app/test-pmd/rxonly.c | 3 ++
> app/test-pmd/testpmd.h | 6 ++-
> app/test-pmd/txonly.c | 8 +++-
> drivers/net/i40e/i40e_ethdev.c | 52 +++++++++++++++++++++++++
> drivers/net/i40e/i40e_ethdev_vf.c | 13 +++++++
> drivers/net/i40e/i40e_rxtx.c | 81 +++++++++++++++++++++++++--------------
> drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
> examples/ipv4_multicast/main.c | 1 +
> lib/librte_ether/rte_ethdev.h | 2 +
> lib/librte_mbuf/rte_mbuf.h | 10 ++++-
> 15 files changed, 243 insertions(+), 43 deletions(-)
>
> --
> 1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* Re: [dpdk-dev] [PATCH v2 0/6] support i40e QinQ stripping and insertion
2015-06-02 7:37 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Liu, Jijiang
@ 2015-06-08 7:32 ` Cao, Min
0 siblings, 0 replies; 55+ messages in thread
From: Cao, Min @ 2015-06-08 7:32 UTC (permalink / raw)
To: Liu, Jijiang, Zhang, Helin, dev
Tested-by: Min Cao <min.cao@intel.com>
- OS: Fedora20 3.11.10-301
- GCC: gcc version 4.8.2 20131212
- CPU: Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz
- NIC: Ethernet controller: Intel Corporation Device 1572 (rev 01)
- Default x86_64-native-linuxapp-gcc configuration
- Total 2 cases, 2 passed, 0 failed
- Case: double vlan filter
- Case: double vlan insertion
> -----Original Message-----
> From: Zhang, Helin
> Sent: Tuesday, June 2, 2015 11:16 AM
> To: dev@dpdk.org
> Cc: Cao, Min; Liu, Jijiang; Wu, Jingjing; Ananyev, Konstantin;
> Richardson, Bruce; olivier.matz@6wind.com; Zhang, Helin
> Subject: [PATCH v2 0/6] support i40e QinQ stripping and insertion
>
> As i40e hardware can be reconfigured to support QinQ stripping and
> insertion, this patch set is to enable that together with using the
> reserved 16 bits in 'struct rte_mbuf' for the second vlan tag.
> Corresponding command is added in testpmd for testing.
> Note that no need to rework vPMD, as nothings used in it changed.
>
> v2 changes:
> * Added commit logs noting which commit each change fixes.
> * Fixed a typo.
> * Kept the original RX/TX offload flags as they were, and appended
> new flags with new bit masks, for ABI compatibility.
> * Supported double vlan stripping/insertion in examples/ipv4_multicast.
>
> Helin Zhang (6):
> ixgbe: remove a discarded source line
> mbuf: use the reserved 16 bits for double vlan
> i40e: support double vlan stripping and insertion
> i40evf: add supported offload capability flags
> app/testpmd: add test cases for qinq stripping and insertion
> examples/ipv4_multicast: support double vlan stripping and insertion
>
> app/test-pmd/cmdline.c | 78 +++++++++++++++++++++++++++++++++----
> app/test-pmd/config.c | 21 +++++++++-
> app/test-pmd/flowgen.c | 4 +-
> app/test-pmd/macfwd.c | 3 ++
> app/test-pmd/macswap.c | 3 ++
> app/test-pmd/rxonly.c | 3 ++
> app/test-pmd/testpmd.h | 6 ++-
> app/test-pmd/txonly.c | 8 +++-
> drivers/net/i40e/i40e_ethdev.c | 52 +++++++++++++++++++++++++
> drivers/net/i40e/i40e_ethdev_vf.c | 13 +++++++
> drivers/net/i40e/i40e_rxtx.c | 81 +++++++++++++++++++++++++--------------
> drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
> examples/ipv4_multicast/main.c | 1 +
> lib/librte_ether/rte_ethdev.h | 2 +
> lib/librte_mbuf/rte_mbuf.h | 10 ++++-
> 15 files changed, 243 insertions(+), 43 deletions(-)
>
> --
> 1.9.3
* Re: [dpdk-dev] [PATCH v2 0/6] support i40e QinQ stripping and insertion
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Helin Zhang
` (6 preceding siblings ...)
2015-06-02 7:37 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Liu, Jijiang
@ 2015-06-08 7:40 ` Olivier MATZ
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 0/7] " Helin Zhang
8 siblings, 0 replies; 55+ messages in thread
From: Olivier MATZ @ 2015-06-08 7:40 UTC (permalink / raw)
To: Helin Zhang, dev
Hi Helin,
On 06/02/2015 05:16 AM, Helin Zhang wrote:
> As i40e hardware can be reconfigured to support QinQ stripping and
> insertion, this patch set is to enable that together with using the
> reserved 16 bits in 'struct rte_mbuf' for the second vlan tag.
> A corresponding command is added in testpmd for testing.
> Note that there is no need to rework the vPMD, as nothing it uses has changed.
>
> v2 changes:
> * Added commit logs noting which commit each change fixes.
> * Fixed a typo.
> * Kept the original RX/TX offload flags as they were, and appended
> new flags with new bit masks, for ABI compatibility.
> * Supported double vlan stripping/insertion in examples/ipv4_multicast.
Acked-by: Olivier Matz <olivier.matz@6wind.com>
* [dpdk-dev] [PATCH v3 0/7] support i40e QinQ stripping and insertion
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Helin Zhang
` (7 preceding siblings ...)
2015-06-08 7:40 ` Olivier MATZ
@ 2015-06-11 7:03 ` Helin Zhang
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 1/7] ixgbe: remove a discarded source line Helin Zhang
` (7 more replies)
8 siblings, 8 replies; 55+ messages in thread
From: Helin Zhang @ 2015-06-11 7:03 UTC (permalink / raw)
To: dev
As i40e hardware can be reconfigured to support QinQ stripping
and insertion, this patch set is to enable that together with
using the reserved 16 bits in 'struct rte_mbuf' for the second
vlan tag. A corresponding command is added in testpmd for testing.
Note that there is no need to rework the vPMD, as nothing it uses has changed.
v2 changes:
* Added commit logs noting which commit each change fixes.
* Fixed a typo.
* Kept the original RX/TX offload flags as they were, and appended
new flags with new bit masks, for ABI compatibility.
* Supported double vlan stripping/insertion in examples/ipv4_multicast.
v3 changes:
* Updated documentation (Testpmd Application User Guide).
Helin Zhang (7):
ixgbe: remove a discarded source line
mbuf: use the reserved 16 bits for double vlan
i40e: support double vlan stripping and insertion
i40evf: add supported offload capability flags
app/testpmd: add test cases for qinq stripping and insertion
examples/ipv4_multicast: support double vlan stripping and insertion
doc: update testpmd command
app/test-pmd/cmdline.c | 78 ++++++++++++++++++++++++---
app/test-pmd/config.c | 21 +++++++-
app/test-pmd/flowgen.c | 4 +-
app/test-pmd/macfwd.c | 3 ++
app/test-pmd/macswap.c | 3 ++
app/test-pmd/rxonly.c | 3 ++
app/test-pmd/testpmd.h | 6 ++-
app/test-pmd/txonly.c | 8 ++-
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 14 ++++-
drivers/net/i40e/i40e_ethdev.c | 52 ++++++++++++++++++
drivers/net/i40e/i40e_ethdev_vf.c | 13 +++++
drivers/net/i40e/i40e_rxtx.c | 81 ++++++++++++++++++-----------
drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
examples/ipv4_multicast/main.c | 1 +
lib/librte_ether/rte_ethdev.h | 2 +
lib/librte_mbuf/rte_mbuf.h | 10 +++-
16 files changed, 255 insertions(+), 45 deletions(-)
--
1.9.3
* [dpdk-dev] [PATCH v3 1/7] ixgbe: remove a discarded source line
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 0/7] " Helin Zhang
@ 2015-06-11 7:03 ` Helin Zhang
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 2/7] mbuf: use the reserved 16 bits for double vlan Helin Zhang
` (6 subsequent siblings)
7 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-06-11 7:03 UTC (permalink / raw)
To: dev
A little-endian to CPU byte order conversion was added for reading
the vlan tag from the RX descriptor, but the original source line was
never deleted. Remove that discarded line.
Fixes: 23fcffe8ffac ("ixgbe: fix id and hash with flow director")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
1 file changed, 1 deletion(-)
v2 changes:
* Added a commit log noting which commit this change fixes.
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 4f9ab22..041c544 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -981,7 +981,6 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
pkt_len = (uint16_t)(rxdp[j].wb.upper.length - rxq->crc_len);
mb->data_len = pkt_len;
mb->pkt_len = pkt_len;
- mb->vlan_tci = rxdp[j].wb.upper.vlan;
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
--
1.9.3
* [dpdk-dev] [PATCH v3 2/7] mbuf: use the reserved 16 bits for double vlan
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 0/7] " Helin Zhang
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 1/7] ixgbe: remove a discarded source line Helin Zhang
@ 2015-06-11 7:03 ` Helin Zhang
2015-06-25 8:31 ` Zhang, Helin
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 3/7] i40e: support double vlan stripping and insertion Helin Zhang
` (5 subsequent siblings)
7 siblings, 1 reply; 55+ messages in thread
From: Helin Zhang @ 2015-06-11 7:03 UTC (permalink / raw)
To: dev
Use the reserved 16 bits in the rte_mbuf structure for the outer vlan
tag, and add QinQ offloading flags for both the RX and TX sides.
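For reference, an application requests double vlan insertion on transmit by setting the new flag together with both tag fields. A minimal sketch, using a simplified mbuf stand-in rather than the real struct rte_mbuf, with the flag value taken from this patch:

```c
#include <stdint.h>
#include <assert.h>

/* TX QinQ flag bit as defined in this patch (rte_mbuf.h). */
#define PKT_TX_QINQ_PKT (1ULL << 49)

/* Simplified stand-in for struct rte_mbuf; only the fields used here. */
struct mbuf {
	uint64_t ol_flags;
	uint16_t vlan_tci;        /* inner VLAN tag, CPU order */
	uint16_t vlan_tci_outer;  /* outer VLAN tag (formerly reserved) */
};

/* Request hardware insertion of both tags: the inner tag goes in
 * vlan_tci and the outer tag in the new vlan_tci_outer field. */
static void request_qinq_insert(struct mbuf *m, uint16_t inner, uint16_t outer)
{
	m->ol_flags |= PKT_TX_QINQ_PKT;
	m->vlan_tci = inner;
	m->vlan_tci_outer = outer;
}
```

Drivers that support the offload (i40e in this series) then emit a context descriptor carrying the outer tag and a data descriptor carrying the inner tag.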
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
lib/librte_mbuf/rte_mbuf.h | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
v2 changes:
* Fixed a typo.
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index ab6de67..84fe181 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -101,11 +101,17 @@ extern "C" {
#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
+#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double VLAN stripped. */
/* add new RX flags here */
/* add new TX flags here */
/**
+ * Second VLAN insertion (QinQ) flag.
+ */
+#define PKT_TX_QINQ_PKT (1ULL << 49) /**< TX packet with double VLAN inserted. */
+
+/**
* TCP segmentation offload. To enable this offload feature for a
* packet to be transmitted on hardware supporting TSO:
* - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
@@ -279,7 +285,7 @@ struct rte_mbuf {
uint16_t data_len; /**< Amount of data in segment buffer. */
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
- uint16_t reserved;
+ uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU order) */
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
@@ -777,6 +783,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
m->pkt_len = 0;
m->tx_offload = 0;
m->vlan_tci = 0;
+ m->vlan_tci_outer = 0;
m->nb_segs = 1;
m->port = 0xff;
@@ -849,6 +856,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
mi->data_len = m->data_len;
mi->port = m->port;
mi->vlan_tci = m->vlan_tci;
+ mi->vlan_tci_outer = m->vlan_tci_outer;
mi->tx_offload = m->tx_offload;
mi->hash = m->hash;
--
1.9.3
* [dpdk-dev] [PATCH v3 3/7] i40e: support double vlan stripping and insertion
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 0/7] " Helin Zhang
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 1/7] ixgbe: remove a discarded source line Helin Zhang
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 2/7] mbuf: use the reserved 16 bits for double vlan Helin Zhang
@ 2015-06-11 7:03 ` Helin Zhang
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 4/7] i40evf: add supported offload capability flags Helin Zhang
` (4 subsequent siblings)
7 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-06-11 7:03 UTC (permalink / raw)
To: dev
Configure specific registers to enable double vlan stripping on the
RX side and insertion on the TX side.
The RX descriptors are parsed, and the vlan tags and flags are saved
to the corresponding mbuf fields when a vlan tag is detected.
The TX descriptors are configured according to the settings in the
mbufs, to trigger hardware insertion of double vlan tags for each
packet sent out.
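The tag placement implemented by the new i40e_rxd_to_vlan_tci() helper in this patch can be sketched in plain C. This is a simplified stand-in for struct rte_mbuf and the descriptor status bits, not the driver's actual code; the flag values are those defined in the series:

```c
#include <stdint.h>
#include <assert.h>

/* RX flag bits; PKT_RX_QINQ_PKT is (1ULL << 15) per this series. */
#define PKT_RX_VLAN_PKT (1ULL << 0)
#define PKT_RX_QINQ_PKT (1ULL << 15)

/* Simplified stand-in for struct rte_mbuf. */
struct mbuf {
	uint64_t ol_flags;
	uint16_t vlan_tci;        /* inner tag after QinQ strip */
	uint16_t vlan_tci_outer;  /* outer tag after QinQ strip */
};

/* Mirrors i40e_rxd_to_vlan_tci(): l2tag1 holds the first stripped tag;
 * when a second tag (l2tag2_2) is present, the first tag is promoted
 * to the outer field and the second becomes the inner tag. */
static void rxd_to_vlan_tci(struct mbuf *mb, int has_tag1, uint16_t l2tag1,
			    int has_tag2, uint16_t l2tag2_2)
{
	if (has_tag1) {
		mb->ol_flags |= PKT_RX_VLAN_PKT;
		mb->vlan_tci = l2tag1;
	} else {
		mb->vlan_tci = 0;
	}
	if (has_tag2) {
		mb->ol_flags |= PKT_RX_QINQ_PKT;
		mb->vlan_tci_outer = mb->vlan_tci; /* promote to outer */
		mb->vlan_tci = l2tag2_2;           /* second tag is inner */
	} else {
		mb->vlan_tci_outer = 0;
	}
}
```

Note the swap: the tag read from l2tag1 ends up in vlan_tci_outer whenever a second tag is present, which is why the helper must run before the other flag bits are ORed into ol_flags is irrelevant, but it must run once per descriptor.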
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 52 +++++++++++++++++++++++++
drivers/net/i40e/i40e_ethdev_vf.c | 6 +++
drivers/net/i40e/i40e_rxtx.c | 81 +++++++++++++++++++++++++--------------
lib/librte_ether/rte_ethdev.h | 2 +
4 files changed, 112 insertions(+), 29 deletions(-)
v2 changes:
* Kept the original RX/TX offload flags as they were, added new
flags after with new bit masks, for ABI compatibility.
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index da6c0b5..7593a70 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -211,6 +211,7 @@ static int i40e_dev_filter_ctrl(struct rte_eth_dev *dev,
void *arg);
static void i40e_configure_registers(struct i40e_hw *hw);
static void i40e_hw_init(struct i40e_hw *hw);
+static int i40e_config_qinq(struct i40e_hw *hw, struct i40e_vsi *vsi);
static const struct rte_pci_id pci_id_i40e_map[] = {
#define RTE_PCI_DEV_ID_DECL_I40E(vend, dev) {RTE_PCI_DEVICE(vend, dev)},
@@ -1529,11 +1530,13 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_vfs = dev->pci_dev->max_vfs;
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_QINQ_STRIP |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM;
dev_info->tx_offload_capa =
DEV_TX_OFFLOAD_VLAN_INSERT |
+ DEV_TX_OFFLOAD_QINQ_INSERT |
DEV_TX_OFFLOAD_IPV4_CKSUM |
DEV_TX_OFFLOAD_UDP_CKSUM |
DEV_TX_OFFLOAD_TCP_CKSUM |
@@ -3056,6 +3059,7 @@ i40e_vsi_setup(struct i40e_pf *pf,
* macvlan filter which is expected and cannot be removed.
*/
i40e_update_default_filter_setting(vsi);
+ i40e_config_qinq(hw, vsi);
} else if (type == I40E_VSI_SRIOV) {
memset(&ctxt, 0, sizeof(ctxt));
/**
@@ -3096,6 +3100,8 @@ i40e_vsi_setup(struct i40e_pf *pf,
* Since VSI is not created yet, only configure parameter,
* will add vsi below.
*/
+
+ i40e_config_qinq(hw, vsi);
} else if (type == I40E_VSI_VMDQ2) {
memset(&ctxt, 0, sizeof(ctxt));
/*
@@ -5697,3 +5703,49 @@ i40e_configure_registers(struct i40e_hw *hw)
"0x%"PRIx32, reg_table[i].val, reg_table[i].addr);
}
}
+
+#define I40E_VSI_TSR(_i) (0x00050800 + ((_i) * 4))
+#define I40E_VSI_TSR_QINQ_CONFIG 0xc030
+#define I40E_VSI_L2TAGSTXVALID(_i) (0x00042800 + ((_i) * 4))
+#define I40E_VSI_L2TAGSTXVALID_QINQ 0xab
+static int
+i40e_config_qinq(struct i40e_hw *hw, struct i40e_vsi *vsi)
+{
+ uint32_t reg;
+ int ret;
+
+ if (vsi->vsi_id >= I40E_MAX_NUM_VSIS) {
+ PMD_DRV_LOG(ERR, "VSI ID exceeds the maximum");
+ return -EINVAL;
+ }
+
+ /* Configure for double VLAN RX stripping */
+ reg = I40E_READ_REG(hw, I40E_VSI_TSR(vsi->vsi_id));
+ if ((reg & I40E_VSI_TSR_QINQ_CONFIG) != I40E_VSI_TSR_QINQ_CONFIG) {
+ reg |= I40E_VSI_TSR_QINQ_CONFIG;
+ ret = i40e_aq_debug_write_register(hw,
+ I40E_VSI_TSR(vsi->vsi_id),
+ reg, NULL);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to update VSI_TSR[%d]",
+ vsi->vsi_id);
+ return I40E_ERR_CONFIG;
+ }
+ }
+
+ /* Configure for double VLAN TX insertion */
+ reg = I40E_READ_REG(hw, I40E_VSI_L2TAGSTXVALID(vsi->vsi_id));
+ if ((reg & 0xff) != I40E_VSI_L2TAGSTXVALID_QINQ) {
+ reg = I40E_VSI_L2TAGSTXVALID_QINQ;
+ ret = i40e_aq_debug_write_register(hw,
+ I40E_VSI_L2TAGSTXVALID(
+ vsi->vsi_id), reg, NULL);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to update "
+ "VSI_L2TAGSTXVALID[%d]", vsi->vsi_id);
+ return I40E_ERR_CONFIG;
+ }
+ }
+
+ return 0;
+}
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 4f4404e..3ae2553 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1672,6 +1672,12 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
+ dev_info->rx_offload_capa =
+ DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_QINQ_STRIP;
+ dev_info->tx_offload_capa =
+ DEV_TX_OFFLOAD_VLAN_INSERT |
+ DEV_TX_OFFLOAD_QINQ_INSERT;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 2de0ac4..b2e1d6d 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -94,18 +94,44 @@ static uint16_t i40e_xmit_pkts_simple(void *tx_queue,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
+static inline void
+i40e_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union i40e_rx_desc *rxdp)
+{
+ if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
+ (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
+ mb->ol_flags |= PKT_RX_VLAN_PKT;
+ mb->vlan_tci =
+ rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
+ rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1));
+ } else {
+ mb->vlan_tci = 0;
+ }
+#ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
+ (1 << I40E_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT)) {
+ mb->ol_flags |= PKT_RX_QINQ_PKT;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_1),
+ rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
+ PMD_RX_LOG(DEBUG, "Mbuf vlan_tci: %u, vlan_tci_outer: %u",
+ mb->vlan_tci, mb->vlan_tci_outer);
+}
+
/* Translate the rx descriptor status to pkt flags */
static inline uint64_t
i40e_rxd_status_to_pkt_flags(uint64_t qword)
{
uint64_t flags;
- /* Check if VLAN packet */
- flags = qword & (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
- PKT_RX_VLAN_PKT : 0;
-
/* Check if RSS_HASH */
- flags |= (((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) &
+ flags = (((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) &
I40E_RX_DESC_FLTSTAT_RSS_HASH) ==
I40E_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
@@ -696,16 +722,12 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
mb = rxep[j].mbuf;
qword1 = rte_le_to_cpu_64(\
rxdp[j].wb.qword1.status_error_len);
- rx_status = (qword1 & I40E_RXD_QW1_STATUS_MASK) >>
- I40E_RXD_QW1_STATUS_SHIFT;
pkt_len = ((qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
mb->data_len = pkt_len;
mb->pkt_len = pkt_len;
- mb->vlan_tci = rx_status &
- (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
- rte_le_to_cpu_16(\
- rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
+ mb->ol_flags = 0;
+ i40e_rxd_to_vlan_tci(mb, &rxdp[j]);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
@@ -719,7 +741,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
if (pkt_flags & PKT_RX_FDIR)
pkt_flags |= i40e_rxd_build_fdir(&rxdp[j], mb);
- mb->ol_flags = pkt_flags;
+ mb->ol_flags |= pkt_flags;
}
for (j = 0; j < I40E_LOOK_AHEAD; j++)
@@ -945,10 +967,8 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rxm->pkt_len = rx_packet_len;
rxm->data_len = rx_packet_len;
rxm->port = rxq->port_id;
-
- rxm->vlan_tci = rx_status &
- (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
- rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
+ rxm->ol_flags = 0;
+ i40e_rxd_to_vlan_tci(rxm, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
@@ -960,7 +980,7 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (pkt_flags & PKT_RX_FDIR)
pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
- rxm->ol_flags = pkt_flags;
+ rxm->ol_flags |= pkt_flags;
rx_pkts[nb_rx++] = rxm;
}
@@ -1105,9 +1125,8 @@ i40e_recv_scattered_pkts(void *rx_queue,
}
first_seg->port = rxq->port_id;
- first_seg->vlan_tci = (rx_status &
- (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
- rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
+ first_seg->ol_flags = 0;
+ i40e_rxd_to_vlan_tci(first_seg, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
@@ -1120,7 +1139,7 @@ i40e_recv_scattered_pkts(void *rx_queue,
if (pkt_flags & PKT_RX_FDIR)
pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
- first_seg->ol_flags = pkt_flags;
+ first_seg->ol_flags |= pkt_flags;
/* Prefetch data of first segment, if configured to do so. */
rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
@@ -1158,17 +1177,15 @@ i40e_recv_scattered_pkts(void *rx_queue,
static inline uint16_t
i40e_calc_context_desc(uint64_t flags)
{
- uint64_t mask = 0ULL;
-
- mask |= (PKT_TX_OUTER_IP_CKSUM | PKT_TX_TCP_SEG);
+ static uint64_t mask = PKT_TX_OUTER_IP_CKSUM |
+ PKT_TX_TCP_SEG |
+ PKT_TX_QINQ_PKT;
#ifdef RTE_LIBRTE_IEEE1588
mask |= PKT_TX_IEEE1588_TMST;
#endif
- if (flags & mask)
- return 1;
- return 0;
+ return ((flags & mask) ? 1 : 0);
}
/* set i40e TSO context descriptor */
@@ -1289,9 +1306,9 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
}
/* Descriptor based VLAN insertion */
- if (ol_flags & PKT_TX_VLAN_PKT) {
+ if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
tx_flags |= tx_pkt->vlan_tci <<
- I40E_TX_FLAG_L2TAG1_SHIFT;
+ I40E_TX_FLAG_L2TAG1_SHIFT;
tx_flags |= I40E_TX_FLAG_INSERT_VLAN;
td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
td_tag = (tx_flags & I40E_TX_FLAG_L2TAG1_MASK) >>
@@ -1339,6 +1356,12 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ctx_txd->tunneling_params =
rte_cpu_to_le_32(cd_tunneling_params);
+ if (ol_flags & PKT_TX_QINQ_PKT) {
+ cd_l2tag2 = tx_pkt->vlan_tci_outer;
+ cd_type_cmd_tso_mss |=
+ ((uint64_t)I40E_TX_CTX_DESC_IL2TAG2 <<
+ I40E_TXD_CTX_QW1_CMD_SHIFT);
+ }
ctx_txd->l2tag2 = rte_cpu_to_le_16(cd_l2tag2);
ctx_txd->type_cmd_tso_mss =
rte_cpu_to_le_64(cd_type_cmd_tso_mss);
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 16dbe00..892280c 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -887,6 +887,7 @@ struct rte_eth_conf {
#define DEV_RX_OFFLOAD_UDP_CKSUM 0x00000004
#define DEV_RX_OFFLOAD_TCP_CKSUM 0x00000008
#define DEV_RX_OFFLOAD_TCP_LRO 0x00000010
+#define DEV_RX_OFFLOAD_QINQ_STRIP 0x00000020
/**
* TX offload capabilities of a device.
@@ -899,6 +900,7 @@ struct rte_eth_conf {
#define DEV_TX_OFFLOAD_TCP_TSO 0x00000020
#define DEV_TX_OFFLOAD_UDP_TSO 0x00000040
#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000100
struct rte_eth_dev_info {
struct rte_pci_device *pci_dev; /**< Device PCI information. */
--
1.9.3
* [dpdk-dev] [PATCH v3 4/7] i40evf: add supported offload capability flags
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 0/7] " Helin Zhang
` (2 preceding siblings ...)
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 3/7] i40e: support double vlan stripping and insertion Helin Zhang
@ 2015-06-11 7:03 ` Helin Zhang
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 5/7] app/testpmd: add test cases for qinq stripping and insertion Helin Zhang
` (3 subsequent siblings)
7 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-06-11 7:03 UTC (permalink / raw)
To: dev
Add checksum offload capability flags for offloads that have
already been supported for a long time.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/i40e/i40e_ethdev_vf.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 3ae2553..669d05b 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1674,10 +1674,17 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_QINQ_STRIP;
+ DEV_RX_OFFLOAD_QINQ_STRIP |
+ DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM;
dev_info->tx_offload_capa =
DEV_TX_OFFLOAD_VLAN_INSERT |
- DEV_TX_OFFLOAD_QINQ_INSERT;
+ DEV_TX_OFFLOAD_QINQ_INSERT |
+ DEV_TX_OFFLOAD_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_UDP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_CKSUM |
+ DEV_TX_OFFLOAD_SCTP_CKSUM;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
--
1.9.3
* [dpdk-dev] [PATCH v3 5/7] app/testpmd: add test cases for qinq stripping and insertion
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 0/7] " Helin Zhang
` (3 preceding siblings ...)
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 4/7] i40evf: add supported offload capability flags Helin Zhang
@ 2015-06-11 7:03 ` Helin Zhang
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 6/7] examples/ipv4_multicast: support double vlan " Helin Zhang
` (2 subsequent siblings)
7 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-06-11 7:03 UTC (permalink / raw)
To: dev
If a double vlan is detected, its stripped flags and vlan tags can be
printed in rxonly mode. The 'tx_vlan set' test command is expanded
to set either single or double vlan tags on the TX side for each
packet to be sent out.
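An illustrative testpmd session for the extended command might look like the following. The port id and tag values are hypothetical, and the exact command to enable the extended (QinQ) VLAN offload may differ by testpmd version; per cmd_tx_vlan_set_qinq_parsed() in this patch, that offload must be enabled before the two-tag form is accepted:

```
testpmd> vlan set extend on 0
testpmd> tx_vlan set 0 10, 20
testpmd> start
```

Here 10 is the inner vlan_id and 20 is vlan_id_outer, matching the new syntax "tx_vlan set (port_id) vlan_id[, vlan_id_outer]".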
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
app/test-pmd/cmdline.c | 78 +++++++++++++++++++++++++++++++++++++++++++++-----
app/test-pmd/config.c | 21 +++++++++++++-
app/test-pmd/flowgen.c | 4 ++-
app/test-pmd/macfwd.c | 3 ++
app/test-pmd/macswap.c | 3 ++
app/test-pmd/rxonly.c | 3 ++
app/test-pmd/testpmd.h | 6 +++-
app/test-pmd/txonly.c | 8 ++++--
8 files changed, 114 insertions(+), 12 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index f01db2a..db2e73e 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -304,9 +304,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"rx_vxlan_port rm (udp_port) (port_id)\n"
" Remove an UDP port for VXLAN packet filter on a port\n\n"
- "tx_vlan set vlan_id (port_id)\n"
- " Set hardware insertion of VLAN ID in packets sent"
- " on a port.\n\n"
+ "tx_vlan set (port_id) vlan_id[, vlan_id_outer]\n"
+ " Set hardware insertion of VLAN IDs (single or double VLAN "
+ "depends on the number of VLAN IDs) in packets sent on a port.\n\n"
"tx_vlan set pvid port_id vlan_id (on|off)\n"
" Set port based TX VLAN insertion.\n\n"
@@ -2799,8 +2799,8 @@ cmdline_parse_inst_t cmd_rx_vlan_filter = {
struct cmd_tx_vlan_set_result {
cmdline_fixed_string_t tx_vlan;
cmdline_fixed_string_t set;
- uint16_t vlan_id;
uint8_t port_id;
+ uint16_t vlan_id;
};
static void
@@ -2809,6 +2809,13 @@ cmd_tx_vlan_set_parsed(void *parsed_result,
__attribute__((unused)) void *data)
{
struct cmd_tx_vlan_set_result *res = parsed_result;
+ int vlan_offload = rte_eth_dev_get_vlan_offload(res->port_id);
+
+ if (vlan_offload & ETH_VLAN_EXTEND_OFFLOAD) {
+ printf("Error, as QinQ has been enabled.\n");
+ return;
+ }
+
tx_vlan_set(res->port_id, res->vlan_id);
}
@@ -2828,13 +2835,69 @@ cmdline_parse_token_num_t cmd_tx_vlan_set_portid =
cmdline_parse_inst_t cmd_tx_vlan_set = {
.f = cmd_tx_vlan_set_parsed,
.data = NULL,
- .help_str = "enable hardware insertion of a VLAN header with a given "
- "TAG Identifier in packets sent on a port",
+ .help_str = "enable hardware insertion of a single VLAN header "
+ "with a given TAG Identifier in packets sent on a port",
.tokens = {
(void *)&cmd_tx_vlan_set_tx_vlan,
(void *)&cmd_tx_vlan_set_set,
- (void *)&cmd_tx_vlan_set_vlanid,
(void *)&cmd_tx_vlan_set_portid,
+ (void *)&cmd_tx_vlan_set_vlanid,
+ NULL,
+ },
+};
+
+/* *** ENABLE HARDWARE INSERTION OF Double VLAN HEADER IN TX PACKETS *** */
+struct cmd_tx_vlan_set_qinq_result {
+ cmdline_fixed_string_t tx_vlan;
+ cmdline_fixed_string_t set;
+ uint8_t port_id;
+ uint16_t vlan_id;
+ uint16_t vlan_id_outer;
+};
+
+static void
+cmd_tx_vlan_set_qinq_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_tx_vlan_set_qinq_result *res = parsed_result;
+ int vlan_offload = rte_eth_dev_get_vlan_offload(res->port_id);
+
+ if (!(vlan_offload & ETH_VLAN_EXTEND_OFFLOAD)) {
+ printf("Error, as QinQ hasn't been enabled.\n");
+ return;
+ }
+
+ tx_qinq_set(res->port_id, res->vlan_id, res->vlan_id_outer);
+}
+
+cmdline_parse_token_string_t cmd_tx_vlan_set_qinq_tx_vlan =
+ TOKEN_STRING_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ tx_vlan, "tx_vlan");
+cmdline_parse_token_string_t cmd_tx_vlan_set_qinq_set =
+ TOKEN_STRING_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ set, "set");
+cmdline_parse_token_num_t cmd_tx_vlan_set_qinq_portid =
+ TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ port_id, UINT8);
+cmdline_parse_token_num_t cmd_tx_vlan_set_qinq_vlanid =
+ TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ vlan_id, UINT16);
+cmdline_parse_token_num_t cmd_tx_vlan_set_qinq_vlanid_outer =
+ TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
+ vlan_id_outer, UINT16);
+
+cmdline_parse_inst_t cmd_tx_vlan_set_qinq = {
+ .f = cmd_tx_vlan_set_qinq_parsed,
+ .data = NULL,
+ .help_str = "enable hardware insertion of double VLAN header "
+ "with given TAG Identifiers in packets sent on a port",
+ .tokens = {
+ (void *)&cmd_tx_vlan_set_qinq_tx_vlan,
+ (void *)&cmd_tx_vlan_set_qinq_set,
+ (void *)&cmd_tx_vlan_set_qinq_portid,
+ (void *)&cmd_tx_vlan_set_qinq_vlanid,
+ (void *)&cmd_tx_vlan_set_qinq_vlanid_outer,
NULL,
},
};
@@ -8782,6 +8845,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_rx_vlan_filter_all,
(cmdline_parse_inst_t *)&cmd_rx_vlan_filter,
(cmdline_parse_inst_t *)&cmd_tx_vlan_set,
+ (cmdline_parse_inst_t *)&cmd_tx_vlan_set_qinq,
(cmdline_parse_inst_t *)&cmd_tx_vlan_reset,
(cmdline_parse_inst_t *)&cmd_tx_vlan_set_pvid,
(cmdline_parse_inst_t *)&cmd_csum_set,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index f788ed5..8c49e4d 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1732,16 +1732,35 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
return;
if (vlan_id_is_invalid(vlan_id))
return;
+ tx_vlan_reset(port_id);
ports[port_id].tx_ol_flags |= TESTPMD_TX_OFFLOAD_INSERT_VLAN;
ports[port_id].tx_vlan_id = vlan_id;
}
void
+tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
+{
+ if (port_id_is_invalid(port_id, ENABLED_WARN))
+ return;
+ if (vlan_id_is_invalid(vlan_id))
+ return;
+ if (vlan_id_is_invalid(vlan_id_outer))
+ return;
+ tx_vlan_reset(port_id);
+ ports[port_id].tx_ol_flags |= TESTPMD_TX_OFFLOAD_INSERT_QINQ;
+ ports[port_id].tx_vlan_id = vlan_id;
+ ports[port_id].tx_vlan_id_outer = vlan_id_outer;
+}
+
+void
tx_vlan_reset(portid_t port_id)
{
if (port_id_is_invalid(port_id, ENABLED_WARN))
return;
- ports[port_id].tx_ol_flags &= ~TESTPMD_TX_OFFLOAD_INSERT_VLAN;
+ ports[port_id].tx_ol_flags &= ~(TESTPMD_TX_OFFLOAD_INSERT_VLAN |
+ TESTPMD_TX_OFFLOAD_INSERT_QINQ);
+ ports[port_id].tx_vlan_id = 0;
+ ports[port_id].tx_vlan_id_outer = 0;
}
void
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 72016c9..fce96dc 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -136,7 +136,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
struct ether_hdr *eth_hdr;
struct ipv4_hdr *ip_hdr;
struct udp_hdr *udp_hdr;
- uint16_t vlan_tci;
+ uint16_t vlan_tci, vlan_tci_outer;
uint16_t ol_flags;
uint16_t nb_rx;
uint16_t nb_tx;
@@ -163,6 +163,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
mbp = current_fwd_lcore()->mbp;
vlan_tci = ports[fs->tx_port].tx_vlan_id;
+ vlan_tci_outer = ports[fs->tx_port].tx_vlan_id_outer;
ol_flags = ports[fs->tx_port].tx_ol_flags;
for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
@@ -208,6 +209,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
pkt->pkt_len = pkt_size;
pkt->ol_flags = ol_flags;
pkt->vlan_tci = vlan_tci;
+ pkt->vlan_tci_outer = vlan_tci_outer;
pkt->l2_len = sizeof(struct ether_hdr);
pkt->l3_len = sizeof(struct ipv4_hdr);
pkts_burst[nb_pkt] = pkt;
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 035e5eb..3b7fffb 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -110,6 +110,8 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
txp = &ports[fs->tx_port];
if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
ol_flags = PKT_TX_VLAN_PKT;
+ if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_QINQ)
+ ol_flags |= PKT_TX_QINQ_PKT;
for (i = 0; i < nb_rx; i++) {
mb = pkts_burst[i];
eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
@@ -121,6 +123,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
mb->l2_len = sizeof(struct ether_hdr);
mb->l3_len = sizeof(struct ipv4_hdr);
mb->vlan_tci = txp->tx_vlan_id;
+ mb->vlan_tci_outer = txp->tx_vlan_id_outer;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c
index 6729849..154889d 100644
--- a/app/test-pmd/macswap.c
+++ b/app/test-pmd/macswap.c
@@ -110,6 +110,8 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
txp = &ports[fs->tx_port];
if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
ol_flags = PKT_TX_VLAN_PKT;
+ if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_QINQ)
+ ol_flags |= PKT_TX_QINQ_PKT;
for (i = 0; i < nb_rx; i++) {
mb = pkts_burst[i];
eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
@@ -123,6 +125,7 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
mb->l2_len = sizeof(struct ether_hdr);
mb->l3_len = sizeof(struct ipv4_hdr);
mb->vlan_tci = txp->tx_vlan_id;
+ mb->vlan_tci_outer = txp->tx_vlan_id_outer;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index ac56090..f6a2f84 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -160,6 +160,9 @@ pkt_burst_receive(struct fwd_stream *fs)
}
if (ol_flags & PKT_RX_VLAN_PKT)
printf(" - VLAN tci=0x%x", mb->vlan_tci);
+ if (ol_flags & PKT_RX_QINQ_PKT)
+ printf(" - QinQ VLAN tci=0x%x, VLAN tci outer=0x%x",
+ mb->vlan_tci, mb->vlan_tci_outer);
if (is_encapsulation) {
struct ipv4_hdr *ipv4_hdr;
struct ipv6_hdr *ipv6_hdr;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index c3b6700..e71951b 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -133,6 +133,8 @@ struct fwd_stream {
#define TESTPMD_TX_OFFLOAD_PARSE_TUNNEL 0x0020
/** Insert VLAN header in forward engine */
#define TESTPMD_TX_OFFLOAD_INSERT_VLAN 0x0040
+/** Insert double VLAN header in forward engine */
+#define TESTPMD_TX_OFFLOAD_INSERT_QINQ 0x0080
/**
* The data structure associated with each port.
@@ -149,7 +151,8 @@ struct rte_port {
unsigned int socket_id; /**< For NUMA support */
uint16_t tx_ol_flags;/**< TX Offload Flags (TESTPMD_TX_OFFLOAD...). */
uint16_t tso_segsz; /**< MSS for segmentation offload. */
- uint16_t tx_vlan_id; /**< Tag Id. in TX VLAN packets. */
+ uint16_t tx_vlan_id;/**< The tag ID */
+ uint16_t tx_vlan_id_outer;/**< The outer tag ID */
void *fwd_ctx; /**< Forwarding mode context */
uint64_t rx_bad_ip_csum; /**< rx pkts with bad ip checksum */
uint64_t rx_bad_l4_csum; /**< rx pkts with bad l4 checksum */
@@ -513,6 +516,7 @@ int rx_vft_set(portid_t port_id, uint16_t vlan_id, int on);
void vlan_extend_set(portid_t port_id, int on);
void vlan_tpid_set(portid_t port_id, uint16_t tp_id);
void tx_vlan_set(portid_t port_id, uint16_t vlan_id);
+void tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer);
void tx_vlan_reset(portid_t port_id);
void tx_vlan_pvid_set(portid_t port_id, uint16_t vlan_id, int on);
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index ca32c85..8ce6109 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -202,7 +202,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
struct ether_hdr eth_hdr;
uint16_t nb_tx;
uint16_t nb_pkt;
- uint16_t vlan_tci;
+ uint16_t vlan_tci, vlan_tci_outer;
uint64_t ol_flags = 0;
uint8_t i;
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
@@ -218,8 +218,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
mbp = current_fwd_lcore()->mbp;
txp = &ports[fs->tx_port];
vlan_tci = txp->tx_vlan_id;
+ vlan_tci_outer = txp->tx_vlan_id_outer;
if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
ol_flags = PKT_TX_VLAN_PKT;
+ if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_QINQ)
+ ol_flags |= PKT_TX_QINQ_PKT;
for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
pkt = tx_mbuf_alloc(mbp);
if (pkt == NULL) {
@@ -266,7 +269,8 @@ pkt_burst_transmit(struct fwd_stream *fs)
pkt->nb_segs = tx_pkt_nb_segs;
pkt->pkt_len = tx_pkt_length;
pkt->ol_flags = ol_flags;
- pkt->vlan_tci = vlan_tci;
+ pkt->vlan_tci = vlan_tci;
+ pkt->vlan_tci_outer = vlan_tci_outer;
pkt->l2_len = sizeof(struct ether_hdr);
pkt->l3_len = sizeof(struct ipv4_hdr);
pkts_burst[nb_pkt] = pkt;
--
1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* [dpdk-dev] [PATCH v3 6/7] examples/ipv4_multicast: support double vlan stripping and insertion
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 0/7] " Helin Zhang
` (4 preceding siblings ...)
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 5/7] app/testpmd: add test cases for qinq stripping and insertion Helin Zhang
@ 2015-06-11 7:03 ` Helin Zhang
2015-06-11 7:04 ` [dpdk-dev] [PATCH v3 7/7] doc: update testpmd command Helin Zhang
2015-06-11 7:25 ` [dpdk-dev] [PATCH v3 0/7] support i40e QinQ stripping and insertion Wu, Jingjing
7 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-06-11 7:03 UTC (permalink / raw)
To: dev
The outer VLAN tag should be copied from the source packet buffer to
support double VLAN stripping and insertion, as double VLAN tags can be
stripped or inserted by some NIC hardware.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
examples/ipv4_multicast/main.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 2a2b915..d4253c0 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -298,6 +298,7 @@ mcast_out_pkt(struct rte_mbuf *pkt, int use_clone)
/* copy metadata from source packet*/
hdr->port = pkt->port;
hdr->vlan_tci = pkt->vlan_tci;
+ hdr->vlan_tci_outer = pkt->vlan_tci_outer;
hdr->tx_offload = pkt->tx_offload;
hdr->hash = pkt->hash;
--
1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* [dpdk-dev] [PATCH v3 7/7] doc: update testpmd command
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 0/7] " Helin Zhang
` (5 preceding siblings ...)
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 6/7] examples/ipv4_multicast: support double vlan " Helin Zhang
@ 2015-06-11 7:04 ` Helin Zhang
2015-06-11 7:25 ` [dpdk-dev] [PATCH v3 0/7] support i40e QinQ stripping and insertion Wu, Jingjing
7 siblings, 0 replies; 55+ messages in thread
From: Helin Zhang @ 2015-06-11 7:04 UTC (permalink / raw)
To: dev
The testpmd 'tx_vlan' command has been modified to support insertion
of both single and dual VLAN IDs. A corresponding update to the
'Testpmd Application User Guide' is included.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 761172e..f1fa523 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -503,9 +503,19 @@ rx_vxlan_port rm (udp_port) (port_id)
tx_vlan set
~~~~~~~~~~~
-Set hardware insertion of VLAN ID in packets sent on a port:
+Set hardware insertion of VLAN IDs in packets sent on a port:
-tx_vlan set (vlan_id) (port_id)
+tx_vlan set (port_id) vlan_id[, vlan_id_outer]
+
+.. code-block:: console
+
+ Set a single VLAN ID (5) insertion on port 0.
+
+ tx_vlan set 0 5
+
+ Set double VLAN ID (inner: 2, outer: 3) insertion on port 1.
+
+ tx_vlan set 1 2 3
tx_vlan set pvid
~~~~~~~~~~~~~~~~
--
1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* Re: [dpdk-dev] [PATCH v3 0/7] support i40e QinQ stripping and insertion
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 0/7] " Helin Zhang
` (6 preceding siblings ...)
2015-06-11 7:04 ` [dpdk-dev] [PATCH v3 7/7] doc: update testpmd command Helin Zhang
@ 2015-06-11 7:25 ` Wu, Jingjing
2015-07-07 14:43 ` Thomas Monjalon
7 siblings, 1 reply; 55+ messages in thread
From: Wu, Jingjing @ 2015-06-11 7:25 UTC (permalink / raw)
To: Zhang, Helin, dev
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
> -----Original Message-----
> From: Zhang, Helin
> Sent: Thursday, June 11, 2015 3:04 PM
> To: dev@dpdk.org
> Cc: Cao, Min; Liu, Jijiang; Wu, Jingjing; Ananyev, Konstantin; Richardson,
> Bruce; olivier.matz@6wind.com; Zhang, Helin
> Subject: [PATCH v3 0/7] support i40e QinQ stripping and insertion
>
> As i40e hardware can be reconfigured to support QinQ stripping and insertion,
> this patch set enables that together with using the reserved 16 bits in
> 'struct rte_mbuf' for the second VLAN tag. A corresponding command is added
> in testpmd for testing.
> Note that there is no need to rework the vector PMD, as nothing it uses has changed.
>
> v2 changes:
> * Added commit-log notes stating which commit each change fixes.
> * Fixed a typo.
> * Kept the original RX/TX offload flags as they were, added new
> flags after with new bit masks, for ABI compatibility.
> * Supported double vlan stripping/insertion in examples/ipv4_multicast.
>
> v3 changes:
> * update documentation (Testpmd Application User Guide).
>
> Helin Zhang (7):
> ixgbe: remove a discarded source line
> mbuf: use the reserved 16 bits for double vlan
> i40e: support double vlan stripping and insertion
> i40evf: add supported offload capability flags
> app/testpmd: add test cases for qinq stripping and insertion
> examples/ipv4_multicast: support double vlan stripping and insertion
> doc: update testpmd command
>
> app/test-pmd/cmdline.c | 78 ++++++++++++++++++++++++---
> app/test-pmd/config.c | 21 +++++++-
> app/test-pmd/flowgen.c | 4 +-
> app/test-pmd/macfwd.c | 3 ++
> app/test-pmd/macswap.c | 3 ++
> app/test-pmd/rxonly.c | 3 ++
> app/test-pmd/testpmd.h | 6 ++-
> app/test-pmd/txonly.c | 8 ++-
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 14 ++++-
> drivers/net/i40e/i40e_ethdev.c | 52 ++++++++++++++++++
> drivers/net/i40e/i40e_ethdev_vf.c | 13 +++++
> drivers/net/i40e/i40e_rxtx.c | 81 ++++++++++++++++++-----------
> drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
> examples/ipv4_multicast/main.c | 1 +
> lib/librte_ether/rte_ethdev.h | 2 +
> lib/librte_mbuf/rte_mbuf.h | 10 +++-
> 16 files changed, 255 insertions(+), 45 deletions(-)
>
> --
> 1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/7] mbuf: use the reserved 16 bits for double vlan
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 2/7] mbuf: use the reserved 16 bits for double vlan Helin Zhang
@ 2015-06-25 8:31 ` Zhang, Helin
2015-06-28 20:36 ` Thomas Monjalon
0 siblings, 1 reply; 55+ messages in thread
From: Zhang, Helin @ 2015-06-25 8:31 UTC (permalink / raw)
To: Neil Horman; +Cc: dev
Hi Neil
> -----Original Message-----
> From: Zhang, Helin
> Sent: Thursday, June 11, 2015 3:04 PM
> To: dev@dpdk.org
> Cc: Cao, Min; Liu, Jijiang; Wu, Jingjing; Ananyev, Konstantin; Richardson, Bruce;
> olivier.matz@6wind.com; Zhang, Helin
> Subject: [PATCH v3 2/7] mbuf: use the reserved 16 bits for double vlan
>
> Use the reserved 16 bits in rte_mbuf structure for the outer vlan, also add QinQ
> offloading flags for both RX and TX sides.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> ---
> lib/librte_mbuf/rte_mbuf.h | 10 +++++++++-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
> v2 changes:
> * Fixed a typo.
>
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h index
> ab6de67..84fe181 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -101,11 +101,17 @@ extern "C" {
> #define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with
> IPv6 header. */
> #define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match.
> */
> #define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if
> FDIR match. */
> +#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double
> VLAN stripped. */
> /* add new RX flags here */
>
> /* add new TX flags here */
>
> /**
> + * Second VLAN insertion (QinQ) flag.
> + */
> +#define PKT_TX_QINQ_PKT (1ULL << 49) /**< TX packet with double
> VLAN inserted. */
> +
> +/**
> * TCP segmentation offload. To enable this offload feature for a
> * packet to be transmitted on hardware supporting TSO:
> * - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies @@
> -279,7 +285,7 @@ struct rte_mbuf {
> uint16_t data_len; /**< Amount of data in segment buffer. */
> uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
> uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
> - uint16_t reserved;
> + uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU
> +order) */
Do you think this is an ABI break or not? It just uses the reserved 16 bits,
which were intended for the second_vlan_tag. Thanks in advance!
I did not see any "Incompatible" reported by validate_abi.sh.
Regards,
Helin
> union {
> uint32_t rss; /**< RSS hash result if RSS enabled */
> struct {
> @@ -777,6 +783,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
> m->pkt_len = 0;
> m->tx_offload = 0;
> m->vlan_tci = 0;
> + m->vlan_tci_outer = 0;
> m->nb_segs = 1;
> m->port = 0xff;
>
> @@ -849,6 +856,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi,
> struct rte_mbuf *m)
> mi->data_len = m->data_len;
> mi->port = m->port;
> mi->vlan_tci = m->vlan_tci;
> + mi->vlan_tci_outer = m->vlan_tci_outer;
> mi->tx_offload = m->tx_offload;
> mi->hash = m->hash;
>
> --
> 1.9.3
^ permalink raw reply [flat|nested] 55+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/7] mbuf: use the reserved 16 bits for double vlan
2015-06-25 8:31 ` Zhang, Helin
@ 2015-06-28 20:36 ` Thomas Monjalon
2015-06-30 7:33 ` Olivier MATZ
0 siblings, 1 reply; 55+ messages in thread
From: Thomas Monjalon @ 2015-06-28 20:36 UTC (permalink / raw)
To: Neil Horman, olivier.matz; +Cc: dev
Neil, Olivier,
Your opinions are requested here.
Thanks
2015-06-25 08:31, Zhang, Helin:
> Hi Neil
[...]
> > -279,7 +285,7 @@ struct rte_mbuf {
> > uint16_t data_len; /**< Amount of data in segment buffer. */
> > uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
> > uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
> > - uint16_t reserved;
> > + uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU
> > +order) */
> Do you think this is an ABI break or not? It just uses the reserved 16 bits,
> which were intended for the second_vlan_tag. Thanks in advance!
> I did not see any "Incompatible" reported by validate_abi.sh.
>
> Regards,
> Helin
^ permalink raw reply [flat|nested] 55+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/7] mbuf: use the reserved 16 bits for double vlan
2015-06-28 20:36 ` Thomas Monjalon
@ 2015-06-30 7:33 ` Olivier MATZ
0 siblings, 0 replies; 55+ messages in thread
From: Olivier MATZ @ 2015-06-30 7:33 UTC (permalink / raw)
To: Thomas Monjalon, Neil Horman; +Cc: dev
Hi,
On 06/28/2015 10:36 PM, Thomas Monjalon wrote:
> Neil, Olivier,
> Your opinions are requested here.
> Thanks
>
> 2015-06-25 08:31, Zhang, Helin:
>> Hi Neil
> [...]
>>> -279,7 +285,7 @@ struct rte_mbuf {
>>> uint16_t data_len; /**< Amount of data in segment buffer. */
>>> uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
>>> uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
>>> - uint16_t reserved;
>>> + uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU
>>> +order) */
>> Do you think this is an ABI break or not? It just uses the reserved 16 bits,
>> which were intended for the second_vlan_tag. Thanks in advance!
>> I did not see any "Incompatible" reported by validate_abi.sh.
I don't feel there's any ABI break here. I think an application
should not use the "reserved" fields.
Regards,
Olivier
^ permalink raw reply [flat|nested] 55+ messages in thread
* Re: [dpdk-dev] [PATCH v3 0/7] support i40e QinQ stripping and insertion
2015-06-11 7:25 ` [dpdk-dev] [PATCH v3 0/7] support i40e QinQ stripping and insertion Wu, Jingjing
@ 2015-07-07 14:43 ` Thomas Monjalon
0 siblings, 0 replies; 55+ messages in thread
From: Thomas Monjalon @ 2015-07-07 14:43 UTC (permalink / raw)
To: Zhang, Helin; +Cc: dev
> > As i40e hardware can be reconfigured to support QinQ stripping and insertion,
> > this patch set enables that together with using the reserved 16 bits in
> > 'struct rte_mbuf' for the second VLAN tag. A corresponding command is added
> > in testpmd for testing.
> > Note that there is no need to rework the vector PMD, as nothing it uses has changed.
> >
> > v2 changes:
> > * Added commit-log notes stating which commit each change fixes.
> > * Fixed a typo.
> > * Kept the original RX/TX offload flags as they were, added new
> > flags after with new bit masks, for ABI compatibility.
> > * Supported double vlan stripping/insertion in examples/ipv4_multicast.
> >
> > v3 changes:
> > * update documentation (Testpmd Application User Guide).
> >
> > Helin Zhang (7):
> > ixgbe: remove a discarded source line
> > mbuf: use the reserved 16 bits for double vlan
> > i40e: support double vlan stripping and insertion
> > i40evf: add supported offload capability flags
> > app/testpmd: add test cases for qinq stripping and insertion
> > examples/ipv4_multicast: support double vlan stripping and insertion
> > doc: update testpmd command
>
> Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Applied, thanks
^ permalink raw reply [flat|nested] 55+ messages in thread
end of thread, other threads:[~2015-07-07 14:45 UTC | newest]
Thread overview: 55+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-05 2:32 [dpdk-dev] [PATCH RFC 0/6] support of QinQ stripping and insertion of i40e Helin Zhang
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 1/6] mbuf: update mbuf structure for QinQ support Helin Zhang
2015-05-05 11:04 ` Ananyev, Konstantin
2015-05-05 15:42 ` Chilikin, Andrey
2015-05-05 22:37 ` Ananyev, Konstantin
2015-05-06 4:07 ` Zhang, Helin
2015-05-06 4:06 ` Zhang, Helin
2015-05-06 8:39 ` Bruce Richardson
2015-05-06 8:48 ` Zhang, Helin
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 2/6] i40e: reconfigure the hardware to support QinQ stripping/insertion Helin Zhang
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 3/6] i40e: support of QinQ stripping/insertion in RX/TX Helin Zhang
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 4/6] ethdev: add QinQ offload capability flags Helin Zhang
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 5/6] i40e: update of " Helin Zhang
2015-05-05 2:32 ` [dpdk-dev] [PATCH RFC 6/6] app/testpmd: support of QinQ stripping and insertion Helin Zhang
2015-05-26 8:36 ` [dpdk-dev] [PATCH 0/5] support i40e " Helin Zhang
2015-05-26 8:36 ` [dpdk-dev] [PATCH 1/5] ixgbe: remove a discarded source line Helin Zhang
2015-06-01 8:50 ` Olivier MATZ
2015-06-02 1:45 ` Zhang, Helin
2015-05-26 8:36 ` [dpdk-dev] [PATCH 2/5] mbuf: use the reserved 16 bits for double vlan Helin Zhang
2015-05-26 14:55 ` Stephen Hemminger
2015-05-26 15:00 ` Zhang, Helin
2015-05-26 15:02 ` Ananyev, Konstantin
2015-05-26 15:35 ` Stephen Hemminger
2015-05-26 15:46 ` Ananyev, Konstantin
2015-05-27 1:07 ` Zhang, Helin
2015-06-01 8:50 ` Olivier MATZ
2015-06-02 2:37 ` Zhang, Helin
2015-05-26 8:36 ` [dpdk-dev] [PATCH 3/5] i40e: support double vlan stripping and insertion Helin Zhang
2015-06-01 8:50 ` Olivier MATZ
2015-06-02 2:45 ` Zhang, Helin
2015-05-26 8:36 ` [dpdk-dev] [PATCH 4/5] i40evf: add supported offload capability flags Helin Zhang
2015-05-26 8:36 ` [dpdk-dev] [PATCH 5/5] app/testpmd: add test cases for qinq stripping and insertion Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 1/6] ixgbe: remove a discarded source line Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 2/6] mbuf: use the reserved 16 bits for double vlan Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 3/6] i40e: support double vlan stripping and insertion Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 4/6] i40evf: add supported offload capability flags Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 5/6] app/testpmd: add test cases for qinq stripping and insertion Helin Zhang
2015-06-02 3:16 ` [dpdk-dev] [PATCH v2 6/6] examples/ipv4_multicast: support double vlan " Helin Zhang
2015-06-02 7:37 ` [dpdk-dev] [PATCH v2 0/6] support i40e QinQ " Liu, Jijiang
2015-06-08 7:32 ` Cao, Min
2015-06-08 7:40 ` Olivier MATZ
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 0/7] " Helin Zhang
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 1/7] ixgbe: remove a discarded source line Helin Zhang
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 2/7] mbuf: use the reserved 16 bits for double vlan Helin Zhang
2015-06-25 8:31 ` Zhang, Helin
2015-06-28 20:36 ` Thomas Monjalon
2015-06-30 7:33 ` Olivier MATZ
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 3/7] i40e: support double vlan stripping and insertion Helin Zhang
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 4/7] i40evf: add supported offload capability flags Helin Zhang
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 5/7] app/testpmd: add test cases for qinq stripping and insertion Helin Zhang
2015-06-11 7:03 ` [dpdk-dev] [PATCH v3 6/7] examples/ipv4_multicast: support double vlan " Helin Zhang
2015-06-11 7:04 ` [dpdk-dev] [PATCH v3 7/7] doc: update testpmd command Helin Zhang
2015-06-11 7:25 ` [dpdk-dev] [PATCH v3 0/7] support i40e QinQ stripping and insertion Wu, Jingjing
2015-07-07 14:43 ` Thomas Monjalon