From: Lijun Ou <oulijun@huawei.com>
To: <luca.boccassi@gmail.com>, <stable@dpdk.org>
Cc: <linuxarm@huawei.com>
Subject: [dpdk-stable] [PATCH 19.11.6 07/13] net/hns3: fix inserted VLAN tag position in Tx
Date: Fri, 13 Nov 2020 21:37:04 +0800
Message-ID: <1605274630-23414-8-git-send-email-oulijun@huawei.com>
In-Reply-To: <1605274630-23414-1-git-send-email-oulijun@huawei.com>

From: "Wei Hu (Xavier)" <xavier.huwei@huawei.com>

[ upstream commit fc9b57ff576c4088138a8e0e53650b806927e683 ]

On the hns3 network engine, to configure hardware VLAN insertion
offload in the Tx direction, the PMD driver reads the VLAN tags from
the vlan_tci_outer and vlan_tci fields of struct rte_mbuf, fills them
into the Tx Buffer Descriptor and sets the related offload flag for
every packet.
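
As a minimal sketch (assuming an already initialized port and a
mempool named "mp"; the VLAN IDs are illustrative), an application
requests this offload per packet roughly as follows:

    struct rte_mbuf *m = rte_pktmbuf_alloc(mp);

    /* Single tag: hardware inserts vlan_tci into the frame. */
    m->ol_flags |= PKT_TX_VLAN_PKT;
    m->vlan_tci = 100;

    /* QinQ: vlan_tci_outer carries the outer tag, vlan_tci the inner. */
    m->ol_flags |= PKT_TX_QINQ_PKT;
    m->vlan_tci_outer = 200;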

Currently, there are two VLAN-related problems in the 'tx_pkt_burst'
ops implementation:
1) When setting the related offload flag, the PMD driver inserts the
   VLAN tag at the position close to the L3 header. So, when an upper
   application sends a packet that already carries a VLAN tag in its
   data buffer, the VLAN tag offloaded by hardware ends up at the
   wrong position. With VLAN insertion, the VLAN tag from rte_mbuf is
   supposed to be added at the position close to the MAC header.

   However, when PF PVID is enabled by calling the API function
   rte_eth_dev_set_vlan_pvid, or VF PVID is enabled by the hns3 PF
   kernel ethernet driver, the VLAN tag from struct rte_mbuf should be
   filled into the position close to the L3 header, so that it is not
   overwritten by the PVID, which is always inserted at the position
   close to the MAC address (see the PVID sketch after this list).

2) When sending multi-segment packets, the VLAN information must be
   filled into the first Tx Buffer Descriptor. However, the hns3 PMD
   driver currently places it in the last Tx Buffer Descriptor, so
   VLAN insertion offload fails for multi-segment packets.
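
For reference, PF PVID is enabled through the generic ethdev API; a
hedged sketch, assuming port 0 and an illustrative PVID of 300:

    int ret = rte_eth_dev_set_vlan_pvid(0 /* port_id */, 300, 1 /* on */);

    if (ret != 0) /* e.g. -ENOTSUP if the PMD lacks support */
        rte_exit(EXIT_FAILURE, "cannot set PVID: %d\n", ret);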

This patch fixes them by filling the VLAN information into the
correct field of the first Tx Buffer Descriptor.
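
In outline, the corrected transmit path is (a condensed paraphrase of
the diff below, not additional driver code):

    desc = &tx_ring[tx_next_use];
    /* VLAN, paylen and TSO go into the first BD only. */
    hns3_fill_first_desc(txq, desc, m_seg);
    do {
        desc = &tx_ring[tx_next_use];
        /* DMA address, length and valid bit go into every BD. */
        hns3_fill_per_desc(desc, m_seg);
        m_seg = m_seg->next;
        /* ... advance tx_next_use and sw_ring bookkeeping ... */
    } while (m_seg != NULL);
    /* The frame-end bit marks the last BD. */
    desc->tx.tp_fe_sc_vld_ra_ri |= rte_cpu_to_le_16(BIT(HNS3_TXD_FE_B));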

Fixes: bba636698316 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
---
 drivers/net/hns3/hns3_rxtx.c | 102 +++++++++++++++++++++++++++----------------
 1 file changed, 65 insertions(+), 37 deletions(-)

diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 9d14632..e351f37 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1956,51 +1956,58 @@ hns3_set_tso(struct hns3_desc *desc, uint64_t ol_flags,
 	desc->tx.mss = rte_cpu_to_le_16(rxm->tso_segsz);
 }
 
+static inline void
+hns3_fill_per_desc(struct hns3_desc *desc, struct rte_mbuf *rxm)
+{
+	desc->addr = rte_mbuf_data_iova(rxm);
+	desc->tx.send_size = rte_cpu_to_le_16(rte_pktmbuf_data_len(rxm));
+	desc->tx.tp_fe_sc_vld_ra_ri = rte_cpu_to_le_16(BIT(HNS3_TXD_VLD_B));
+}
+
 static void
-fill_desc(struct hns3_tx_queue *txq, uint16_t tx_desc_id, struct rte_mbuf *rxm,
-	  bool first, int offset)
+hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
+		     struct rte_mbuf *rxm)
 {
-	struct hns3_desc *tx_ring = txq->tx_ring;
-	struct hns3_desc *desc = &tx_ring[tx_desc_id];
-	uint8_t frag_end = rxm->next == NULL ? 1 : 0;
 	uint64_t ol_flags = rxm->ol_flags;
-	uint16_t size = rxm->data_len;
-	uint16_t rrcfv = 0;
 	uint32_t hdr_len;
 	uint32_t paylen;
-	uint32_t tmp;
 
-	desc->addr = rte_mbuf_data_iova(rxm) + offset;
-	desc->tx.send_size = rte_cpu_to_le_16(size);
-	hns3_set_bit(rrcfv, HNS3_TXD_VLD_B, 1);
-
-	if (first) {
-		hdr_len = rxm->l2_len + rxm->l3_len + rxm->l4_len;
-		hdr_len += (ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len = rxm->l2_len + rxm->l3_len + rxm->l4_len;
+	hdr_len += (ol_flags & PKT_TX_TUNNEL_MASK) ?
 			   rxm->outer_l2_len + rxm->outer_l3_len : 0;
-		paylen = rxm->pkt_len - hdr_len;
-		desc->tx.paylen = rte_cpu_to_le_32(paylen);
-		hns3_set_tso(desc, ol_flags, paylen, rxm);
-	}
-
-	hns3_set_bit(rrcfv, HNS3_TXD_FE_B, frag_end);
-	desc->tx.tp_fe_sc_vld_ra_ri = rte_cpu_to_le_16(rrcfv);
-
-	if (frag_end) {
-		if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
-			tmp = rte_le_to_cpu_32(desc->tx.type_cs_vlan_tso_len);
-			hns3_set_bit(tmp, HNS3_TXD_VLAN_B, 1);
-			desc->tx.type_cs_vlan_tso_len = rte_cpu_to_le_32(tmp);
-			desc->tx.vlan_tag = rte_cpu_to_le_16(rxm->vlan_tci);
-		}
+	paylen = rxm->pkt_len - hdr_len;
+	desc->tx.paylen = rte_cpu_to_le_32(paylen);
+	hns3_set_tso(desc, ol_flags, paylen, rxm);
 
-		if (ol_flags & PKT_TX_QINQ_PKT) {
-			tmp = rte_le_to_cpu_32(desc->tx.ol_type_vlan_len_msec);
-			hns3_set_bit(tmp, HNS3_TXD_OVLAN_B, 1);
-			desc->tx.ol_type_vlan_len_msec = rte_cpu_to_le_32(tmp);
+	/*
+	 * Currently, hardware doesn't support more than two layers VLAN offload
+	 * in Tx direction based on hns3 network engine. So when the number of
+	 * VLANs in the packets represented by rxm plus the number of VLAN
+	 * offload by hardware such as PVID etc, exceeds two, the packets will
+ * be discarded or the original VLAN of the packets will be overwritten
+	 * by hardware. When the PF PVID is enabled by calling the API function
+	 * named rte_eth_dev_set_vlan_pvid or the VF PVID is enabled by the hns3
+	 * PF kernel ether driver, the outer VLAN tag will always be the PVID.
+	 * To avoid the VLAN of Tx descriptor is overwritten by PVID, it should
+	 * be added to the position close to the IP header when PVID is enabled.
+	 */
+	if (!txq->pvid_state && ol_flags & (PKT_TX_VLAN_PKT |
+				PKT_TX_QINQ_PKT)) {
+		desc->tx.ol_type_vlan_len_msec |=
+				rte_cpu_to_le_32(BIT(HNS3_TXD_OVLAN_B));
+		if (ol_flags & PKT_TX_QINQ_PKT)
 			desc->tx.outer_vlan_tag =
-				rte_cpu_to_le_16(rxm->vlan_tci_outer);
-		}
+					rte_cpu_to_le_16(rxm->vlan_tci_outer);
+		else
+			desc->tx.outer_vlan_tag =
+					rte_cpu_to_le_16(rxm->vlan_tci);
+	}
+
+	if (ol_flags & PKT_TX_QINQ_PKT ||
+	    ((ol_flags & PKT_TX_VLAN_PKT) && txq->pvid_state)) {
+		desc->tx.type_cs_vlan_tso_len |=
+					rte_cpu_to_le_32(BIT(HNS3_TXD_VLAN_B));
+		desc->tx.vlan_tag = rte_cpu_to_le_16(rxm->vlan_tci);
 	}
 }
 
@@ -2523,8 +2530,10 @@ hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	struct rte_net_hdr_lens hdr_lens = {0};
 	struct hns3_tx_queue *txq = tx_queue;
 	struct hns3_entry *tx_bak_pkt;
+	struct hns3_desc *tx_ring;
 	struct rte_mbuf *tx_pkt;
 	struct rte_mbuf *m_seg;
+	struct hns3_desc *desc;
 	uint32_t nb_hold = 0;
 	uint16_t tx_next_use;
 	uint16_t tx_pkt_num;
@@ -2539,6 +2548,7 @@ hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	tx_next_use   = txq->next_to_use;
 	tx_bd_max     = txq->nb_tx_desc;
 	tx_pkt_num = nb_pkts;
+	tx_ring = txq->tx_ring;
 
 	/* send packets */
 	tx_bak_pkt = &txq->sw_ring[tx_next_use];
@@ -2580,8 +2590,22 @@ hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			goto end_of_tx;
 
 		i = 0;
+		desc = &tx_ring[tx_next_use];
+
+		/*
+		 * If the packet is divided into multiple Tx Buffer Descriptors,
+		 * only need to fill vlan, paylen and tso into the first Tx
+		 * Buffer Descriptor.
+		 */
+		hns3_fill_first_desc(txq, desc, m_seg);
+
 		do {
-			fill_desc(txq, tx_next_use, m_seg, (i == 0), 0);
+			desc = &tx_ring[tx_next_use];
+			/*
+			 * Fill valid bits, DMA address and data length for each
+			 * Tx Buffer Descriptor.
+			 */
+			hns3_fill_per_desc(desc, m_seg);
 			tx_bak_pkt->mbuf = m_seg;
 			m_seg = m_seg->next;
 			tx_next_use++;
@@ -2594,6 +2618,10 @@ hns3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			i++;
 		} while (m_seg != NULL);
 
+		/* Add end flag for the last Tx Buffer Descriptor */
+		desc->tx.tp_fe_sc_vld_ra_ri |=
+				 rte_cpu_to_le_16(BIT(HNS3_TXD_FE_B));
+
 		nb_hold += i;
 		txq->next_to_use = tx_next_use;
 		txq->tx_bd_ready -= i;
-- 
2.7.4


