patches for DPDK stable branches
* [PATCH v2 03/12] net/txgbe: fix reserved extra FDIR headroom
From: Jiawen Wu @ 2025-06-09  7:04 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

Remove the redundant 256KB FDIR headroom reservation. The FDIR headroom
is already allocated in txgbe_fdir_configure() when FDIR is enabled, so
the second reservation left the available Rx packet buffer 256KB smaller
than its theoretical size.

Fixes: 8bdc7882f376 ("net/txgbe: support DCB")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_hw.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index ae2ad87c83..76b9ee3c0a 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -2106,9 +2106,7 @@ void txgbe_set_pba(struct txgbe_hw *hw, int num_pb, u32 headroom,
 	u32 rxpktsize, txpktsize, txpbthresh;
 
 	UNREFERENCED_PARAMETER(hw);
-
-	/* Reserve headroom */
-	pbsize -= headroom;
+	UNREFERENCED_PARAMETER(headroom);
 
 	if (!num_pb)
 		num_pb = 1;
-- 
2.48.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 06/12] net/txgbe: fix MAC control frame forwarding
From: Jiawen Wu @ 2025-06-09  7:04 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

The test case "test_pause_fwd_port_stop_start" fails: it expects the
MAC control frame forwarding setting to remain in effect after a port
stop/start. Fix the bug so the setting persists and the test case passes.

Fixes: 69ce8c8a4ce3 ("net/txgbe: support flow control")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_hw.c   | 9 +++++++++
 drivers/net/txgbe/base/txgbe_type.h | 1 +
 drivers/net/txgbe/txgbe_ethdev.c    | 1 +
 3 files changed, 11 insertions(+)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 76b9ee3c0a..42cd0e0e2c 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -226,6 +226,15 @@ s32 txgbe_setup_fc(struct txgbe_hw *hw)
 				      TXGBE_MD_DEV_AUTO_NEG, reg_cu);
 	}
 
+	/*
+	 * Reconfig mac ctrl frame fwd rule to make sure it still
+	 * working after port stop/start.
+	 */
+	wr32m(hw, TXGBE_MACRXFLT, TXGBE_MACRXFLT_CTL_MASK,
+	      (hw->fc.mac_ctrl_frame_fwd ?
+	       TXGBE_MACRXFLT_CTL_NOPS : TXGBE_MACRXFLT_CTL_DROP));
+	txgbe_flush(hw);
+
 	DEBUGOUT("Set up FC; reg = 0x%08X", reg);
 out:
 	return err;
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 383438ea3c..65527a22e7 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -299,6 +299,7 @@ struct txgbe_fc_info {
 	u32 high_water[TXGBE_DCB_TC_MAX]; /* Flow Ctrl High-water */
 	u32 low_water[TXGBE_DCB_TC_MAX]; /* Flow Ctrl Low-water */
 	u16 pause_time; /* Flow Control Pause timer */
+	u8 mac_ctrl_frame_fwd; /* Forward MAC control frames */
 	bool send_xon; /* Flow control send XON */
 	bool strict_ieee; /* Strict IEEE mode */
 	bool disable_fc_autoneg; /* Do not autonegotiate FC */
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index e5736bf387..b68a0557be 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3586,6 +3586,7 @@ txgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	hw->fc.low_water[0]   = fc_conf->low_water;
 	hw->fc.send_xon       = fc_conf->send_xon;
 	hw->fc.disable_fc_autoneg = !fc_conf->autoneg;
+	hw->fc.mac_ctrl_frame_fwd = fc_conf->mac_ctrl_frame_fwd;
 
 	err = txgbe_fc_enable(hw);
 
-- 
2.48.1



* [PATCH v2 07/12] net/ngbe: fix MAC control frame forwarding
From: Jiawen Wu @ 2025-06-09  7:04 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

The test case "test_pause_fwd_port_stop_start" fails: it expects the
MAC control frame forwarding setting to remain in effect after a port
stop/start. Fix the bug so the setting persists and the test case passes.

Fixes: f40e9f0e2278 ("net/ngbe: support flow control")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/base/ngbe_hw.c   | 9 +++++++++
 drivers/net/ngbe/base/ngbe_type.h | 1 +
 drivers/net/ngbe/ngbe_ethdev.c    | 1 +
 3 files changed, 11 insertions(+)

diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 6688ae6a31..bf09f8a817 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -865,6 +865,15 @@ s32 ngbe_setup_fc_em(struct ngbe_hw *hw)
 		goto out;
 	}
 
+	/*
+	 * Reconfig mac ctrl frame fwd rule to make sure it still
+	 * working after port stop/start.
+	 */
+	wr32m(hw, NGBE_MACRXFLT, NGBE_MACRXFLT_CTL_MASK,
+	      (hw->fc.mac_ctrl_frame_fwd ?
+	       NGBE_MACRXFLT_CTL_NOPS : NGBE_MACRXFLT_CTL_DROP));
+	ngbe_flush(hw);
+
 	err = hw->phy.set_pause_adv(hw, reg_cu);
 
 out:
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 7a3b52ffd4..fc571c7457 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -112,6 +112,7 @@ struct ngbe_fc_info {
 	u32 high_water; /* Flow Ctrl High-water */
 	u32 low_water; /* Flow Ctrl Low-water */
 	u16 pause_time; /* Flow Control Pause timer */
+	u8 mac_ctrl_frame_fwd; /* Forward MAC control frames */
 	bool send_xon; /* Flow control send XON */
 	bool strict_ieee; /* Strict IEEE mode */
 	bool disable_fc_autoneg; /* Do not autonegotiate FC */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 08e87471f6..a8f847de8d 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -2420,6 +2420,7 @@ ngbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	hw->fc.low_water      = fc_conf->low_water;
 	hw->fc.send_xon       = fc_conf->send_xon;
 	hw->fc.disable_fc_autoneg = !fc_conf->autoneg;
+	hw->fc.mac_ctrl_frame_fwd = fc_conf->mac_ctrl_frame_fwd;
 
 	err = hw->mac.fc_enable(hw);
 
-- 
2.48.1



* [PATCH v2 08/12] net/txgbe: fix incorrect device statistics
From: Jiawen Wu @ 2025-06-09  7:04 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

The extended statistic "rx_undersize_errors" is incorrectly read from
the counter of frames received with a length error, which belongs to
"rx_length_errors". "rx_undersize_errors" should instead count
shorter-than-64B frames received without any errors.

In addition, "tx_broadcast_packets" should use rd64() to get the full
count on the low and high registers.

Fixes: c9bb590d4295 ("net/txgbe: support device statistics")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_ethdev.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index b68a0557be..580579094b 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -2250,7 +2250,7 @@ txgbe_read_stats_registers(struct txgbe_hw *hw,
 	hw_stats->rx_total_bytes += rd64(hw, TXGBE_MACRXGBOCTL);
 
 	hw_stats->rx_broadcast_packets += rd64(hw, TXGBE_MACRXOCTL);
-	hw_stats->tx_broadcast_packets += rd32(hw, TXGBE_MACTXOCTL);
+	hw_stats->tx_broadcast_packets += rd64(hw, TXGBE_MACTXOCTL);
 
 	hw_stats->rx_size_64_packets += rd64(hw, TXGBE_MACRX1TO64L);
 	hw_stats->rx_size_65_to_127_packets += rd64(hw, TXGBE_MACRX65TO127L);
@@ -2269,7 +2269,8 @@ txgbe_read_stats_registers(struct txgbe_hw *hw,
 	hw_stats->tx_size_1024_to_max_packets +=
 			rd64(hw, TXGBE_MACTX1024TOMAXL);
 
-	hw_stats->rx_undersize_errors += rd64(hw, TXGBE_MACRXERRLENL);
+	hw_stats->rx_length_errors += rd64(hw, TXGBE_MACRXERRLENL);
+	hw_stats->rx_undersize_errors += rd32(hw, TXGBE_MACRXUNDERSIZE);
 	hw_stats->rx_oversize_cnt += rd32(hw, TXGBE_MACRXOVERSIZE);
 	hw_stats->rx_jabber_errors += rd32(hw, TXGBE_MACRXJABBER);
 
-- 
2.48.1



* [PATCH v2 09/12] net/ngbe: fix incorrect device statistics
From: Jiawen Wu @ 2025-06-09  7:04 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

The extended statistic "rx_undersize_errors" is incorrectly read from
the counter of frames received with a length error, which belongs to
"rx_length_errors". "rx_undersize_errors" should instead count
shorter-than-64B frames received without any errors.

In addition, "tx_broadcast_packets" should use rd64() to get the full
count on the low and high registers.

Fixes: fdb1e851975a ("net/ngbe: support basic statistics")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/ngbe_ethdev.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index a8f847de8d..d3ac40299f 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -1429,7 +1429,7 @@ ngbe_read_stats_registers(struct ngbe_hw *hw,
 	hw_stats->rx_total_bytes += rd64(hw, NGBE_MACRXGBOCTL);
 
 	hw_stats->rx_broadcast_packets += rd64(hw, NGBE_MACRXOCTL);
-	hw_stats->tx_broadcast_packets += rd32(hw, NGBE_MACTXOCTL);
+	hw_stats->tx_broadcast_packets += rd64(hw, NGBE_MACTXOCTL);
 
 	hw_stats->rx_size_64_packets += rd64(hw, NGBE_MACRX1TO64L);
 	hw_stats->rx_size_65_to_127_packets += rd64(hw, NGBE_MACRX65TO127L);
@@ -1448,7 +1448,8 @@ ngbe_read_stats_registers(struct ngbe_hw *hw,
 	hw_stats->tx_size_1024_to_max_packets +=
 			rd64(hw, NGBE_MACTX1024TOMAXL);
 
-	hw_stats->rx_undersize_errors += rd64(hw, NGBE_MACRXERRLENL);
+	hw_stats->rx_length_errors += rd64(hw, NGBE_MACRXERRLENL);
+	hw_stats->rx_undersize_errors += rd32(hw, NGBE_MACRXUNDERSIZE);
 	hw_stats->rx_oversize_cnt += rd32(hw, NGBE_MACRXOVERSIZE);
 	hw_stats->rx_jabber_errors += rd32(hw, NGBE_MACRXJABBER);
 
-- 
2.48.1



* [PATCH v2 10/12] net/txgbe: restrict VLAN strip configuration on VF
From: Jiawen Wu @ 2025-06-09  7:04 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

Fix the same issue as PF in commit 66364efcf958 ("net/txgbe: restrict
configuration of VLAN strip offload").

There is a hardware limitation that Rx ring config register is not
writable when Rx ring is enabled, i.e. the TXGBE_RXCFG_ENA bit is set.
But disabling the ring when there is traffic will cause ring get stuck.
So restrict the configuration of VLAN strip offload only if device is
started.

Fixes: aa1ae7941e71 ("net/txgbe: support VF VLAN")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_ethdev_vf.c | 31 +++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index c0d8aa15b2..847febf8c3 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -935,7 +935,7 @@ txgbevf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 }
 
 static void
-txgbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+txgbevf_vlan_strip_q_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 {
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	uint32_t ctrl;
@@ -946,20 +946,28 @@ txgbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 		return;
 
 	ctrl = rd32(hw, TXGBE_RXCFG(queue));
-	txgbe_dev_save_rx_queue(hw, queue);
 	if (on)
 		ctrl |= TXGBE_RXCFG_VLAN;
 	else
 		ctrl &= ~TXGBE_RXCFG_VLAN;
-	wr32(hw, TXGBE_RXCFG(queue), 0);
-	msec_delay(100);
-	txgbe_dev_store_rx_queue(hw, queue);
-	wr32m(hw, TXGBE_RXCFG(queue),
-		TXGBE_RXCFG_VLAN | TXGBE_RXCFG_ENA, ctrl);
+	wr32(hw, TXGBE_RXCFG(queue), ctrl);
 
 	txgbe_vlan_hw_strip_bitmap_set(dev, queue, on);
 }
 
+static void
+txgbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	if (!hw->adapter_stopped) {
+		PMD_DRV_LOG(ERR, "Please stop port first");
+		return;
+	}
+
+	txgbevf_vlan_strip_q_set(dev, queue, on);
+}
+
 static int
 txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 {
@@ -972,7 +980,7 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
 			on = !!(rxq->offloads &	RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
-			txgbevf_vlan_strip_queue_set(dev, i, on);
+			txgbevf_vlan_strip_q_set(dev, i, on);
 		}
 	}
 
@@ -982,6 +990,13 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 static int
 txgbevf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 {
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	if (!hw->adapter_stopped && (mask & RTE_ETH_VLAN_STRIP_MASK)) {
+		PMD_DRV_LOG(ERR, "Please stop port first");
+		return -EPERM;
+	}
+
 	txgbe_config_vlan_strip_on_all_queues(dev, mask);
 
 	txgbevf_vlan_offload_config(dev, mask);
-- 
2.48.1



* [PATCH v2 11/12] net/ngbe: restrict VLAN strip configuration on VF
From: Jiawen Wu @ 2025-06-09  7:04 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

Fix the same issue as PF in commit baca8ec066dc ("net/ngbe: restrict
configuration of VLAN strip offload").

There is a hardware limitation: the Rx ring config register is not
writable while the Rx ring is enabled, i.e. while the TXGBE_RXCFG_ENA bit
is set. But disabling the ring while traffic is flowing causes the ring
to get stuck. So only allow the VLAN strip offload to be configured while
the device is stopped.

Fixes: f47dc03c706f ("net/ngbe: add VLAN ops for VF device")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/ngbe_ethdev_vf.c | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ngbe/ngbe_ethdev_vf.c b/drivers/net/ngbe/ngbe_ethdev_vf.c
index 5d68f1602d..846bc981f6 100644
--- a/drivers/net/ngbe/ngbe_ethdev_vf.c
+++ b/drivers/net/ngbe/ngbe_ethdev_vf.c
@@ -828,7 +828,7 @@ ngbevf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 }
 
 static void
-ngbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+ngbevf_vlan_strip_q_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 {
 	struct ngbe_hw *hw = ngbe_dev_hw(dev);
 	uint32_t ctrl;
@@ -848,6 +848,19 @@ ngbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 	ngbe_vlan_hw_strip_bitmap_set(dev, queue, on);
 }
 
+static void
+ngbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+{
+	struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+	if (!hw->adapter_stopped) {
+		PMD_DRV_LOG(ERR, "Please stop port first");
+		return;
+	}
+
+	ngbevf_vlan_strip_q_set(dev, queue, on);
+}
+
 static int
 ngbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 {
@@ -860,7 +873,7 @@ ngbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
 			on = !!(rxq->offloads &	RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
-			ngbevf_vlan_strip_queue_set(dev, i, on);
+			ngbevf_vlan_strip_q_set(dev, i, on);
 		}
 	}
 
@@ -870,6 +883,13 @@ ngbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 static int
 ngbevf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 {
+	struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+	if (!hw->adapter_stopped && (mask & RTE_ETH_VLAN_STRIP_MASK)) {
+		PMD_DRV_LOG(ERR, "Please stop port first");
+		return -EPERM;
+	}
+
 	ngbe_config_vlan_strip_on_all_queues(dev, mask);
 
 	ngbevf_vlan_offload_config(dev, mask);
-- 
2.48.1



* [PATCH v2 12/12] net/txgbe: add missing LRO flag in mbuf when LRO enabled
From: Jiawen Wu @ 2025-06-09  7:04 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

When LRO is enabled, the driver must set the LRO flag in received
aggregated packets to indicate LRO processing to upper-layer
applications. Add the missing LRO flag to the mbuf's ol_flags field to
fix it.

Fixes: 0e484278c85f ("net/txgbe: support Rx")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_rxtx.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index a85d417ff6..e6f33739c4 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1793,6 +1793,8 @@ txgbe_fill_cluster_head_buf(struct rte_mbuf *head, struct txgbe_rx_desc *desc,
 	pkt_flags = rx_desc_status_to_pkt_flags(staterr, rxq->vlan_flags);
 	pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
 	pkt_flags |= txgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+	if (TXGBE_RXD_RSCCNT(desc->qw0.dw0))
+		pkt_flags |= RTE_MBUF_F_RX_LRO;
 	head->ol_flags = pkt_flags;
 	head->packet_type = txgbe_rxd_pkt_info_to_pkt_type(pkt_info,
 						rxq->pkt_type_mask);
-- 
2.48.1



* [PATCH v3 02/17] net/txgbe: fix incorrect parsing to ntuple filter
From: Jiawen Wu @ 2025-06-13  8:41 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

The rule is incorrectly parsed as an ntuple filter when the pattern
looks like:
flow create ... ipv4 / udp dst is ... / raw ... / end actions ... / end

The rule is then created successfully but does not work. Fix the parsing
so that such rules are handled as FDIR rules.

Fixes: b7eeecb17556 ("net/txgbe: parse n-tuple filter")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_flow.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index 1d854d0767..269f0b54e3 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -361,7 +361,7 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
 
 	if (item->type != RTE_FLOW_ITEM_TYPE_END &&
 		(!item->spec && !item->mask)) {
-		goto action;
+		goto item_end;
 	}
 
 	/* get the TCP/UDP/SCTP info */
@@ -490,6 +490,7 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
 		goto action;
 	}
 
+item_end:
 	/* check if the next not void item is END */
 	item = next_no_void_pattern(pattern, item);
 	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
-- 
2.48.1



* [PATCH v3 03/17] net/txgbe: fix raw pattern match for FDIR rules
  13 siblings, 0 replies; 22+ messages in thread
From: Jiawen Wu @ 2025-06-13  8:41 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

The hardware requires the raw pattern to be two hex bytes, but in the
raw item it is carried as a string. So the length of the raw spec should
be 4, and the string must be converted to the two hex bytes. In
addition, the relative field of the raw spec is now supported and
optional.

Fixes: b973ee26747a ("net/txgbe: parse flow director filter")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_ethdev.h |  5 ++-
 drivers/net/txgbe/txgbe_fdir.c   | 24 +++++++++++++--
 drivers/net/txgbe/txgbe_flow.c   | 53 ++++++++++++++++++++++++--------
 3 files changed, 67 insertions(+), 15 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 36d51fcbb8..0a3c634937 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -116,11 +116,13 @@ struct txgbe_fdir_rule {
 	uint32_t soft_id; /* an unique value for this rule */
 	uint8_t queue; /* assigned rx queue */
 	uint8_t flex_bytes_offset;
+	bool flex_relative;
 };
 
 struct txgbe_hw_fdir_info {
 	struct txgbe_hw_fdir_mask mask;
 	uint8_t     flex_bytes_offset;
+	bool        flex_relative;
 	uint16_t    collision;
 	uint16_t    free;
 	uint16_t    maxhash;
@@ -561,8 +563,9 @@ void txgbe_set_ivar_map(struct txgbe_hw *hw, int8_t direction,
  */
 int txgbe_fdir_configure(struct rte_eth_dev *dev);
 int txgbe_fdir_set_input_mask(struct rte_eth_dev *dev);
+uint16_t txgbe_fdir_get_flex_base(struct txgbe_fdir_rule *rule);
 int txgbe_fdir_set_flexbytes_offset(struct rte_eth_dev *dev,
-				    uint16_t offset);
+				    uint16_t offset, uint16_t flex_base);
 int txgbe_fdir_filter_program(struct rte_eth_dev *dev,
 			      struct txgbe_fdir_rule *rule,
 			      bool del, bool update);
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index f627ab681d..75bf30c00c 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -258,9 +258,24 @@ txgbe_fdir_store_input_mask(struct rte_eth_dev *dev)
 	return 0;
 }
 
+uint16_t
+txgbe_fdir_get_flex_base(struct txgbe_fdir_rule *rule)
+{
+	if (!rule->flex_relative)
+		return TXGBE_FDIRFLEXCFG_BASE_MAC;
+
+	if (rule->input.flow_type & TXGBE_ATR_L4TYPE_MASK)
+		return TXGBE_FDIRFLEXCFG_BASE_PAY;
+
+	if (rule->input.flow_type & TXGBE_ATR_L3TYPE_MASK)
+		return TXGBE_FDIRFLEXCFG_BASE_L3;
+
+	return TXGBE_FDIRFLEXCFG_BASE_L2;
+}
+
 int
 txgbe_fdir_set_flexbytes_offset(struct rte_eth_dev *dev,
-				uint16_t offset)
+				uint16_t offset, uint16_t flex_base)
 {
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	int i;
@@ -268,7 +283,7 @@ txgbe_fdir_set_flexbytes_offset(struct rte_eth_dev *dev,
 	for (i = 0; i < 64; i++) {
 		uint32_t flexreg, flex;
 		flexreg = rd32(hw, TXGBE_FDIRFLEXCFG(i / 4));
-		flex = TXGBE_FDIRFLEXCFG_BASE_MAC;
+		flex = flex_base;
 		flex |= TXGBE_FDIRFLEXCFG_OFST(offset / 2);
 		flexreg &= ~(TXGBE_FDIRFLEXCFG_ALL(~0UL, i % 4));
 		flexreg |= TXGBE_FDIRFLEXCFG_ALL(flex, i % 4);
@@ -910,6 +925,11 @@ txgbe_fdir_flush(struct rte_eth_dev *dev)
 	info->add = 0;
 	info->remove = 0;
 
+	memset(&info->mask, 0, sizeof(struct txgbe_hw_fdir_mask));
+	info->mask_added = false;
+	info->flex_relative = false;
+	info->flex_bytes_offset = 0;
+
 	return ret;
 }
 
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index 269f0b54e3..8670c3e1d7 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -2066,6 +2066,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 
 	/* Get the flex byte info */
 	if (item->type == RTE_FLOW_ITEM_TYPE_RAW) {
+		uint16_t pattern = 0;
+
 		/* Not supported last point for range*/
 		if (item->last) {
 			rte_flow_error_set(error, EINVAL,
@@ -2082,6 +2084,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			return -rte_errno;
 		}
 
+		rule->b_mask = TRUE;
 		raw_mask = item->mask;
 
 		/* check mask */
@@ -2098,19 +2101,21 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			return -rte_errno;
 		}
 
+		rule->b_spec = TRUE;
 		raw_spec = item->spec;
 
 		/* check spec */
-		if (raw_spec->relative != 0 ||
-		    raw_spec->search != 0 ||
+		if (raw_spec->search != 0 ||
 		    raw_spec->reserved != 0 ||
 		    raw_spec->offset > TXGBE_MAX_FLX_SOURCE_OFF ||
 		    raw_spec->offset % 2 ||
 		    raw_spec->limit != 0 ||
-		    raw_spec->length != 2 ||
+		    raw_spec->length != 4 ||
 		    /* pattern can't be 0xffff */
 		    (raw_spec->pattern[0] == 0xff &&
-		     raw_spec->pattern[1] == 0xff)) {
+		     raw_spec->pattern[1] == 0xff &&
+		     raw_spec->pattern[2] == 0xff &&
+		     raw_spec->pattern[3] == 0xff)) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2120,7 +2125,9 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 
 		/* check pattern mask */
 		if (raw_mask->pattern[0] != 0xff ||
-		    raw_mask->pattern[1] != 0xff) {
+		    raw_mask->pattern[1] != 0xff ||
+		    raw_mask->pattern[2] != 0xff ||
+		    raw_mask->pattern[3] != 0xff) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2129,10 +2136,19 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		}
 
 		rule->mask.flex_bytes_mask = 0xffff;
-		rule->input.flex_bytes =
-			(((uint16_t)raw_spec->pattern[1]) << 8) |
-			raw_spec->pattern[0];
+		/* Convert pattern string to hex bytes */
+		if (sscanf((const char *)raw_spec->pattern, "%hx", &pattern) != 1) {
+			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Failed to parse raw pattern");
+			return -rte_errno;
+		}
+		rule->input.flex_bytes = (pattern & 0x00FF) << 8;
+		rule->input.flex_bytes |= (pattern & 0xFF00) >> 8;
+
 		rule->flex_bytes_offset = raw_spec->offset;
+		rule->flex_relative = raw_spec->relative;
 	}
 
 	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
@@ -2836,10 +2852,16 @@ txgbe_flow_create(struct rte_eth_dev *dev,
 				fdir_info->mask = fdir_rule.mask;
 				fdir_info->flex_bytes_offset =
 					fdir_rule.flex_bytes_offset;
+				fdir_info->flex_relative = fdir_rule.flex_relative;
+
+				if (fdir_rule.mask.flex_bytes_mask) {
+					uint16_t flex_base;
 
-				if (fdir_rule.mask.flex_bytes_mask)
+					flex_base = txgbe_fdir_get_flex_base(&fdir_rule);
 					txgbe_fdir_set_flexbytes_offset(dev,
-						fdir_rule.flex_bytes_offset);
+									fdir_rule.flex_bytes_offset,
+									flex_base);
+				}
 
 				ret = txgbe_fdir_set_input_mask(dev);
 				if (ret)
@@ -2861,7 +2883,9 @@ txgbe_flow_create(struct rte_eth_dev *dev,
 				}
 
 				if (fdir_info->flex_bytes_offset !=
-						fdir_rule.flex_bytes_offset)
+				    fdir_rule.flex_bytes_offset ||
+				    fdir_info->flex_relative !=
+				    fdir_rule.flex_relative)
 					goto out;
 			}
 		}
@@ -3089,8 +3113,13 @@ txgbe_flow_destroy(struct rte_eth_dev *dev,
 			TAILQ_REMOVE(&filter_fdir_list,
 				fdir_rule_ptr, entries);
 			rte_free(fdir_rule_ptr);
-			if (TAILQ_EMPTY(&filter_fdir_list))
+			if (TAILQ_EMPTY(&filter_fdir_list)) {
+				memset(&fdir_info->mask, 0,
+					sizeof(struct txgbe_hw_fdir_mask));
 				fdir_info->mask_added = false;
+				fdir_info->flex_relative = false;
+				fdir_info->flex_bytes_offset = 0;
+			}
 		}
 		break;
 	case RTE_ETH_FILTER_L2_TUNNEL:
-- 
2.48.1



* [PATCH v3 04/17] net/txgbe: fix packet type for FDIR filters
From: Jiawen Wu @ 2025-06-13  8:41 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

To match the packet type more flexibly when the pattern is left at its
default, add a packet type mask for FDIR filters.

Fixes: b973ee26747a ("net/txgbe: parse flow director filter")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_type.h |  20 +--
 drivers/net/txgbe/txgbe_ethdev.h    |   3 +-
 drivers/net/txgbe/txgbe_fdir.c      |  16 +--
 drivers/net/txgbe/txgbe_flow.c      | 188 +++++++++++++++-------------
 4 files changed, 116 insertions(+), 111 deletions(-)

diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 4371876649..383438ea3c 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -88,8 +88,11 @@ enum {
 #define TXGBE_ATR_L4TYPE_UDP			0x1
 #define TXGBE_ATR_L4TYPE_TCP			0x2
 #define TXGBE_ATR_L4TYPE_SCTP			0x3
-#define TXGBE_ATR_TUNNEL_MASK			0x10
-#define TXGBE_ATR_TUNNEL_ANY			0x10
+#define TXGBE_ATR_TYPE_MASK_TUN			0x80
+#define TXGBE_ATR_TYPE_MASK_TUN_OUTIP		0x40
+#define TXGBE_ATR_TYPE_MASK_TUN_TYPE		0x20
+#define TXGBE_ATR_TYPE_MASK_L3P			0x10
+#define TXGBE_ATR_TYPE_MASK_L4P			0x08
 enum txgbe_atr_flow_type {
 	TXGBE_ATR_FLOW_TYPE_IPV4		= 0x0,
 	TXGBE_ATR_FLOW_TYPE_UDPV4		= 0x1,
@@ -99,14 +102,6 @@ enum txgbe_atr_flow_type {
 	TXGBE_ATR_FLOW_TYPE_UDPV6		= 0x5,
 	TXGBE_ATR_FLOW_TYPE_TCPV6		= 0x6,
 	TXGBE_ATR_FLOW_TYPE_SCTPV6		= 0x7,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_IPV4	= 0x10,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_UDPV4	= 0x11,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_TCPV4	= 0x12,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_SCTPV4	= 0x13,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_IPV6	= 0x14,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_UDPV6	= 0x15,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_TCPV6	= 0x16,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_SCTPV6	= 0x17,
 };
 
 /* Flow Director ATR input struct. */
@@ -116,11 +111,8 @@ struct txgbe_atr_input {
 	 *
 	 * vm_pool	- 1 byte
 	 * flow_type	- 1 byte
-	 * vlan_id	- 2 bytes
+	 * pkt_type	- 2 bytes
 	 * src_ip	- 16 bytes
-	 * inner_mac	- 6 bytes
-	 * cloud_mode	- 2 bytes
-	 * tni_vni	- 4 bytes
 	 * dst_ip	- 16 bytes
 	 * src_port	- 2 bytes
 	 * dst_port	- 2 bytes
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 0a3c634937..01e8a9fc05 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -91,8 +91,7 @@ struct txgbe_hw_fdir_mask {
 	uint16_t dst_port_mask;
 	uint16_t flex_bytes_mask;
 	uint8_t  mac_addr_byte_mask;
-	uint32_t tunnel_id_mask;
-	uint8_t  tunnel_type_mask;
+	uint8_t  pkt_type_mask; /* reversed mask for hw */
 };
 
 struct txgbe_fdir_filter {
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index 75bf30c00c..0d12fb9a11 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -187,18 +187,12 @@ txgbe_fdir_set_input_mask(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
-	/*
-	 * Program the relevant mask registers.  If src/dst_port or src/dst_addr
-	 * are zero, then assume a full mask for that field. Also assume that
-	 * a VLAN of 0 is unspecified, so mask that out as well.  L4type
-	 * cannot be masked out in this implementation.
-	 */
-	if (info->mask.dst_port_mask == 0 && info->mask.src_port_mask == 0) {
-		/* use the L4 protocol mask for raw IPv4/IPv6 traffic */
-		fdirm |= TXGBE_FDIRMSK_L4P;
-	}
+	/* use the L4 protocol mask for raw IPv4/IPv6 traffic */
+	if (info->mask.pkt_type_mask == 0 && info->mask.dst_port_mask == 0 &&
+	    info->mask.src_port_mask == 0)
+		info->mask.pkt_type_mask |= TXGBE_FDIRMSK_L4P;
 
-	/* TBD: don't support encapsulation yet */
+	fdirm |= info->mask.pkt_type_mask;
 	wr32(hw, TXGBE_FDIRMSK, fdirm);
 
 	/* store the TCP/UDP port masks */
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index 8670c3e1d7..bce88aebd3 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -1487,8 +1487,41 @@ static inline uint8_t signature_match(const struct rte_flow_item pattern[])
 	return 0;
 }
 
+static void
+txgbe_fdir_parse_flow_type(struct txgbe_atr_input *input, u8 ptid, bool tun)
+{
+	if (!tun)
+		ptid = TXGBE_PTID_PKT_IP;
+
+	switch (input->flow_type & TXGBE_ATR_L4TYPE_MASK) {
+	case TXGBE_ATR_L4TYPE_UDP:
+		ptid |= TXGBE_PTID_TYP_UDP;
+		break;
+	case TXGBE_ATR_L4TYPE_TCP:
+		ptid |= TXGBE_PTID_TYP_TCP;
+		break;
+	case TXGBE_ATR_L4TYPE_SCTP:
+		ptid |= TXGBE_PTID_TYP_SCTP;
+		break;
+	default:
+		break;
+	}
+
+	switch (input->flow_type & TXGBE_ATR_L3TYPE_MASK) {
+	case TXGBE_ATR_L3TYPE_IPV4:
+		break;
+	case TXGBE_ATR_L3TYPE_IPV6:
+		ptid |= TXGBE_PTID_PKT_IPV6;
+		break;
+	default:
+		break;
+	}
+
+	input->pkt_type = cpu_to_be16(ptid);
+}
+
 /**
- * Parse the rule to see if it is a IP or MAC VLAN flow director rule.
+ * Parse the rule to see if it is a IP flow director rule.
  * And get the flow director filter info BTW.
  * UDP/TCP/SCTP PATTERN:
  * The first not void item can be ETH or IPV4 or IPV6
@@ -1555,7 +1588,6 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 	const struct rte_flow_item_sctp *sctp_mask;
 	const struct rte_flow_item_raw *raw_mask;
 	const struct rte_flow_item_raw *raw_spec;
-	u32 ptype = 0;
 	uint8_t j;
 
 	if (!pattern) {
@@ -1585,6 +1617,9 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 	 */
 	memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 	memset(&rule->mask, 0, sizeof(struct txgbe_hw_fdir_mask));
+	rule->mask.pkt_type_mask = TXGBE_ATR_TYPE_MASK_L3P |
+				   TXGBE_ATR_TYPE_MASK_L4P;
+	memset(&rule->input, 0, sizeof(struct txgbe_atr_input));
 
 	/**
 	 * The first not void item should be
@@ -1687,7 +1722,9 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			}
 		} else {
 			if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
-					item->type != RTE_FLOW_ITEM_TYPE_VLAN) {
+			    item->type != RTE_FLOW_ITEM_TYPE_VLAN &&
+			    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+			    item->type != RTE_FLOW_ITEM_TYPE_RAW) {
 				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
 					RTE_FLOW_ERROR_TYPE_ITEM,
@@ -1695,6 +1732,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 				return -rte_errno;
 			}
 		}
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN)
+			item = next_no_fuzzy_pattern(pattern, item);
 	}
 
 	/* Get the IPV4 info. */
@@ -1704,7 +1743,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		 * as we must have a flow type.
 		 */
 		rule->input.flow_type = TXGBE_ATR_FLOW_TYPE_IPV4;
-		ptype = txgbe_ptype_table[TXGBE_PT_IPV4];
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L3P;
 		/*Not supported last point for range*/
 		if (item->last) {
 			rte_flow_error_set(error, EINVAL,
@@ -1716,31 +1755,26 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		 * Only care about src & dst addresses,
 		 * others should be masked.
 		 */
-		if (!item->mask) {
-			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-			rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM,
-				item, "Not supported by fdir filter");
-			return -rte_errno;
-		}
-		rule->b_mask = TRUE;
-		ipv4_mask = item->mask;
-		if (ipv4_mask->hdr.version_ihl ||
-		    ipv4_mask->hdr.type_of_service ||
-		    ipv4_mask->hdr.total_length ||
-		    ipv4_mask->hdr.packet_id ||
-		    ipv4_mask->hdr.fragment_offset ||
-		    ipv4_mask->hdr.time_to_live ||
-		    ipv4_mask->hdr.next_proto_id ||
-		    ipv4_mask->hdr.hdr_checksum) {
-			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-			rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM,
-				item, "Not supported by fdir filter");
-			return -rte_errno;
+		if (item->mask) {
+			rule->b_mask = TRUE;
+			ipv4_mask = item->mask;
+			if (ipv4_mask->hdr.version_ihl ||
+			    ipv4_mask->hdr.type_of_service ||
+			    ipv4_mask->hdr.total_length ||
+			    ipv4_mask->hdr.packet_id ||
+			    ipv4_mask->hdr.fragment_offset ||
+			    ipv4_mask->hdr.time_to_live ||
+			    ipv4_mask->hdr.next_proto_id ||
+			    ipv4_mask->hdr.hdr_checksum) {
+				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+			rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr;
+			rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr;
 		}
-		rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr;
-		rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr;
 
 		if (item->spec) {
 			rule->b_spec = TRUE;
@@ -1776,16 +1810,14 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		 * as we must have a flow type.
 		 */
 		rule->input.flow_type = TXGBE_ATR_FLOW_TYPE_IPV6;
-		ptype = txgbe_ptype_table[TXGBE_PT_IPV6];
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L3P;
 
 		/**
 		 * 1. must signature match
 		 * 2. not support last
-		 * 3. mask must not null
 		 */
 		if (rule->mode != RTE_FDIR_MODE_SIGNATURE ||
-		    item->last ||
-		    !item->mask) {
+		    item->last) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -1793,42 +1825,44 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			return -rte_errno;
 		}
 
-		rule->b_mask = TRUE;
-		ipv6_mask = item->mask;
-		if (ipv6_mask->hdr.vtc_flow ||
-		    ipv6_mask->hdr.payload_len ||
-		    ipv6_mask->hdr.proto ||
-		    ipv6_mask->hdr.hop_limits) {
-			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-			rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM,
-				item, "Not supported by fdir filter");
-			return -rte_errno;
-		}
-
-		/* check src addr mask */
-		for (j = 0; j < 16; j++) {
-			if (ipv6_mask->hdr.src_addr.a[j] == UINT8_MAX) {
-				rule->mask.src_ipv6_mask |= 1 << j;
-			} else if (ipv6_mask->hdr.src_addr.a[j] != 0) {
+		if (item->mask) {
+			rule->b_mask = TRUE;
+			ipv6_mask = item->mask;
+			if (ipv6_mask->hdr.vtc_flow ||
+			    ipv6_mask->hdr.payload_len ||
+			    ipv6_mask->hdr.proto ||
+			    ipv6_mask->hdr.hop_limits) {
 				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
 					RTE_FLOW_ERROR_TYPE_ITEM,
 					item, "Not supported by fdir filter");
 				return -rte_errno;
 			}
-		}
 
-		/* check dst addr mask */
-		for (j = 0; j < 16; j++) {
-			if (ipv6_mask->hdr.dst_addr.a[j] == UINT8_MAX) {
-				rule->mask.dst_ipv6_mask |= 1 << j;
-			} else if (ipv6_mask->hdr.dst_addr.a[j] != 0) {
-				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-				rte_flow_error_set(error, EINVAL,
-					RTE_FLOW_ERROR_TYPE_ITEM,
-					item, "Not supported by fdir filter");
-				return -rte_errno;
+			/* check src addr mask */
+			for (j = 0; j < 16; j++) {
+				if (ipv6_mask->hdr.src_addr.a[j] == UINT8_MAX) {
+					rule->mask.src_ipv6_mask |= 1 << j;
+				} else if (ipv6_mask->hdr.src_addr.a[j] != 0) {
+					memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+					rte_flow_error_set(error, EINVAL,
+						RTE_FLOW_ERROR_TYPE_ITEM,
+						item, "Not supported by fdir filter");
+					return -rte_errno;
+				}
+			}
+
+			/* check dst addr mask */
+			for (j = 0; j < 16; j++) {
+				if (ipv6_mask->hdr.dst_addr.a[j] == UINT8_MAX) {
+					rule->mask.dst_ipv6_mask |= 1 << j;
+				} else if (ipv6_mask->hdr.dst_addr.a[j] != 0) {
+					memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+					rte_flow_error_set(error, EINVAL,
+						RTE_FLOW_ERROR_TYPE_ITEM,
+						item, "Not supported by fdir filter");
+					return -rte_errno;
+				}
 			}
 		}
 
@@ -1866,10 +1900,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		 * as we must have a flow type.
 		 */
 		rule->input.flow_type |= TXGBE_ATR_L4TYPE_TCP;
-		if (rule->input.flow_type & TXGBE_ATR_FLOW_TYPE_IPV6)
-			ptype = txgbe_ptype_table[TXGBE_PT_IPV6_TCP];
-		else
-			ptype = txgbe_ptype_table[TXGBE_PT_IPV4_TCP];
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P;
+
 		/*Not supported last point for range*/
 		if (item->last) {
 			rte_flow_error_set(error, EINVAL,
@@ -1933,10 +1965,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		 * as we must have a flow type.
 		 */
 		rule->input.flow_type |= TXGBE_ATR_L4TYPE_UDP;
-		if (rule->input.flow_type & TXGBE_ATR_FLOW_TYPE_IPV6)
-			ptype = txgbe_ptype_table[TXGBE_PT_IPV6_UDP];
-		else
-			ptype = txgbe_ptype_table[TXGBE_PT_IPV4_UDP];
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P;
+
 		/*Not supported last point for range*/
 		if (item->last) {
 			rte_flow_error_set(error, EINVAL,
@@ -1995,10 +2025,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		 * as we must have a flow type.
 		 */
 		rule->input.flow_type |= TXGBE_ATR_L4TYPE_SCTP;
-		if (rule->input.flow_type & TXGBE_ATR_FLOW_TYPE_IPV6)
-			ptype = txgbe_ptype_table[TXGBE_PT_IPV6_SCTP];
-		else
-			ptype = txgbe_ptype_table[TXGBE_PT_IPV4_SCTP];
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P;
+
 		/*Not supported last point for range*/
 		if (item->last) {
 			rte_flow_error_set(error, EINVAL,
@@ -2163,17 +2191,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		}
 	}
 
-	rule->input.pkt_type = cpu_to_be16(txgbe_encode_ptype(ptype));
-
-	if (rule->input.flow_type & TXGBE_ATR_FLOW_TYPE_IPV6) {
-		if (rule->input.flow_type & TXGBE_ATR_L4TYPE_MASK)
-			rule->input.pkt_type &= 0xFFFF;
-		else
-			rule->input.pkt_type &= 0xF8FF;
-
-		rule->input.flow_type &= TXGBE_ATR_L3TYPE_MASK |
-					TXGBE_ATR_L4TYPE_MASK;
-	}
+	txgbe_fdir_parse_flow_type(&rule->input, 0, false);
 
 	return txgbe_parse_fdir_act_attr(attr, actions, rule, error);
 }
@@ -2863,6 +2881,8 @@ txgbe_flow_create(struct rte_eth_dev *dev,
 									flex_base);
 				}
 
+				fdir_info->mask.pkt_type_mask =
+					fdir_rule.mask.pkt_type_mask;
 				ret = txgbe_fdir_set_input_mask(dev);
 				if (ret)
 					goto out;
-- 
2.48.1



^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v3 05/17] net/txgbe: fix to create FDIR filters for SCTP packets
       [not found] ` <20250613084159.22184-1-jiawenwu@trustnetic.com>
                     ` (2 preceding siblings ...)
  2025-06-13  8:41   ` [PATCH v3 04/17] net/txgbe: fix packet type for FDIR filters Jiawen Wu
@ 2025-06-13  8:41   ` Jiawen Wu
  2025-06-13  8:41   ` [PATCH v3 06/17] net/txgbe: fix FDIR perfect mode for IPv6 packets Jiawen Wu
                     ` (9 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Jiawen Wu @ 2025-06-13  8:41 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

The check on the mask of the SCTP item is duplicated and incorrect.
Remove the redundant check so that FDIR filters can be created for SCTP
packets.

Fixes: b973ee26747a ("net/txgbe: parse flow director filter")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_flow.c | 13 -------------
 1 file changed, 13 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index bce88aebd3..c7cbf96a46 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -2067,19 +2067,6 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			rule->input.dst_port =
 				sctp_spec->hdr.dst_port;
 		}
-		/* others even sctp port is not supported */
-		sctp_mask = item->mask;
-		if (sctp_mask &&
-			(sctp_mask->hdr.src_port ||
-			 sctp_mask->hdr.dst_port ||
-			 sctp_mask->hdr.tag ||
-			 sctp_mask->hdr.cksum)) {
-			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-			rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM,
-				item, "Not supported by fdir filter");
-			return -rte_errno;
-		}
 
 		item = next_no_fuzzy_pattern(pattern, item);
 		if (item->type != RTE_FLOW_ITEM_TYPE_RAW &&
-- 
2.48.1



* [PATCH v3 06/17] net/txgbe: fix FDIR perfect mode for IPv6 packets
       [not found] ` <20250613084159.22184-1-jiawenwu@trustnetic.com>
                     ` (3 preceding siblings ...)
  2025-06-13  8:41   ` [PATCH v3 05/17] net/txgbe: fix to create FDIR filters for SCTP packets Jiawen Wu
@ 2025-06-13  8:41   ` Jiawen Wu
  2025-06-13  8:41   ` [PATCH v3 07/17] net/txgbe: fix to create FDIR filters for tunnel packets Jiawen Wu
                     ` (8 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Jiawen Wu @ 2025-06-13  8:41 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

FDIR rules in perfect mode that filter IPv6 packets are supported by the
hardware. Remove the software restriction and fix the register settings.
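The IPv6 address masks that end up (bit-reversed) in TXGBE_FDIRIP6MSK are compacted from the 16-byte rte_flow masks, one bit per fully-masked address byte. A minimal sketch of that compaction, under the assumption stated in the parser that each byte is either 0x00 or 0xFF — `ipv6_mask_compact()` is a hypothetical stand-in for the driver's inline loop:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the parser's IPv6 mask compaction: byte j of the 16-byte
 * rte_flow address mask must be 0x00 or 0xFF; each 0xFF byte sets bit j
 * of the 16-bit mask later programmed (inverted) into TXGBE_FDIRIP6MSK.
 * Returns -1 for a partially-masked byte, which the driver rejects. */
static int ipv6_mask_compact(const uint8_t addr_mask[16], uint16_t *out)
{
	uint16_t m = 0;
	int j;

	for (j = 0; j < 16; j++) {
		if (addr_mask[j] == 0xFF)
			m |= 1u << j;
		else if (addr_mask[j] != 0)
			return -1;
	}
	*out = m;
	return 0;
}
```

With this patch the compacted masks are written for perfect mode as well, instead of only for signature mode.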

Fixes: b973ee26747a ("net/txgbe: parse flow director filter")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_fdir.c | 22 ++++++++--------------
 drivers/net/txgbe/txgbe_flow.c |  7 +------
 2 files changed, 9 insertions(+), 20 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index 0d12fb9a11..0efd43b59a 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -210,15 +210,12 @@ txgbe_fdir_set_input_mask(struct rte_eth_dev *dev)
 	wr32(hw, TXGBE_FDIRSIP4MSK, ~info->mask.src_ipv4_mask);
 	wr32(hw, TXGBE_FDIRDIP4MSK, ~info->mask.dst_ipv4_mask);
 
-	if (mode == RTE_FDIR_MODE_SIGNATURE) {
-		/*
-		 * Store source and destination IPv6 masks (bit reversed)
-		 */
-		fdiripv6m = TXGBE_FDIRIP6MSK_DST(info->mask.dst_ipv6_mask) |
-			    TXGBE_FDIRIP6MSK_SRC(info->mask.src_ipv6_mask);
-
-		wr32(hw, TXGBE_FDIRIP6MSK, ~fdiripv6m);
-	}
+	/*
+	 * Store source and destination IPv6 masks (bit reversed)
+	 */
+	fdiripv6m = TXGBE_FDIRIP6MSK_DST(info->mask.dst_ipv6_mask) |
+		    TXGBE_FDIRIP6MSK_SRC(info->mask.src_ipv6_mask);
+	wr32(hw, TXGBE_FDIRIP6MSK, ~fdiripv6m);
 
 	return 0;
 }
@@ -642,6 +639,8 @@ fdir_write_perfect_filter(struct txgbe_hw *hw,
 	fdircmd |= TXGBE_FDIRPICMD_QP(queue);
 	fdircmd |= TXGBE_FDIRPICMD_POOL(input->vm_pool);
 
+	if (input->flow_type & TXGBE_ATR_L3TYPE_IPV6)
+		fdircmd |= TXGBE_FDIRPICMD_IP6;
 	wr32(hw, TXGBE_FDIRPICMD, fdircmd);
 
 	PMD_DRV_LOG(DEBUG, "Rx Queue=%x hash=%x", queue, fdirhash);
@@ -810,11 +809,6 @@ txgbe_fdir_filter_program(struct rte_eth_dev *dev,
 		is_perfect = TRUE;
 
 	if (is_perfect) {
-		if (rule->input.flow_type & TXGBE_ATR_L3TYPE_IPV6) {
-			PMD_DRV_LOG(ERR, "IPv6 is not supported in"
-				    " perfect mode!");
-			return -ENOTSUP;
-		}
 		fdirhash = atr_compute_perfect_hash(&rule->input,
 				TXGBE_DEV_FDIR_CONF(dev)->pballoc);
 		fdirhash |= TXGBE_FDIRPIHASH_IDX(rule->soft_id);
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index c7cbf96a46..145ee8a452 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -1812,12 +1812,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		rule->input.flow_type = TXGBE_ATR_FLOW_TYPE_IPV6;
 		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L3P;
 
-		/**
-		 * 1. must signature match
-		 * 2. not support last
-		 */
-		if (rule->mode != RTE_FDIR_MODE_SIGNATURE ||
-		    item->last) {
+		if (item->last) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-- 
2.48.1



* [PATCH v3 07/17] net/txgbe: fix to create FDIR filters for tunnel packets
       [not found] ` <20250613084159.22184-1-jiawenwu@trustnetic.com>
                     ` (4 preceding siblings ...)
  2025-06-13  8:41   ` [PATCH v3 06/17] net/txgbe: fix FDIR perfect mode for IPv6 packets Jiawen Wu
@ 2025-06-13  8:41   ` Jiawen Wu
  2025-06-13  8:41   ` [PATCH v3 08/17] net/txgbe: fix reserved extra FDIR headroom Jiawen Wu
                     ` (7 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Jiawen Wu @ 2025-06-13  8:41 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

Fix the creation of FDIR rules for VXLAN/GRE/NVGRE/GENEVE packets, so
that these rules match on the inner packet headers.
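The reworked pattern walk accepts the following outer-item orderings. A toy validity check under illustrative names — the item kinds and helper functions here are stand-ins, the real parser operates on rte_flow item types:

```c
#include <assert.h>

/* Item kinds, standing in for RTE_FLOW_ITEM_TYPE_* (sketch only). */
enum it {
	IT_ETH, IT_IPV4, IT_IPV6, IT_UDP,
	IT_VXLAN, IT_GRE, IT_NVGRE, IT_GENEVE, IT_END
};

/* After an outer UDP item, only VXLAN or GENEVE may follow --
 * GRE and NVGRE ride directly on IP, as the reworked parser enforces. */
static int valid_after_outer_udp(enum it t)
{
	return t == IT_VXLAN || t == IT_GENEVE;
}

/* After the outer IP item, the next not-void item may be an inner IP
 * header, UDP, or any of the supported tunnel encapsulations. */
static int valid_after_outer_ip(enum it t)
{
	return t == IT_IPV4 || t == IT_IPV6 || t == IT_UDP ||
	       t == IT_VXLAN || t == IT_GRE || t == IT_NVGRE ||
	       t == IT_GENEVE;
}
```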

Fixes: b973ee26747a ("net/txgbe: parse flow director filter")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 doc/guides/nics/features/txgbe.ini |   2 +
 drivers/net/txgbe/txgbe_ethdev.h   |   1 -
 drivers/net/txgbe/txgbe_flow.c     | 585 +++++++++++++++++++++++------
 3 files changed, 478 insertions(+), 110 deletions(-)

diff --git a/doc/guides/nics/features/txgbe.ini b/doc/guides/nics/features/txgbe.ini
index be0af3dfad..20f7cb8db8 100644
--- a/doc/guides/nics/features/txgbe.ini
+++ b/doc/guides/nics/features/txgbe.ini
@@ -67,6 +67,8 @@ tcp                  = Y
 udp                  = Y
 vlan                 = P
 vxlan                = Y
+geneve               = Y
+gre                  = Y
 
 [rte_flow actions]
 drop                 = Y
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 01e8a9fc05..c2d0950d2c 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -90,7 +90,6 @@ struct txgbe_hw_fdir_mask {
 	uint16_t src_port_mask;
 	uint16_t dst_port_mask;
 	uint16_t flex_bytes_mask;
-	uint8_t  mac_addr_byte_mask;
 	uint8_t  pkt_type_mask; /* reversed mask for hw */
 };
 
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index 145ee8a452..99a76daca0 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -2179,41 +2179,29 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 }
 
 /**
- * Parse the rule to see if it is a VxLAN or NVGRE flow director rule.
+ * Parse the rule to see if it is a IP tunnel flow director rule.
  * And get the flow director filter info BTW.
- * VxLAN PATTERN:
- * The first not void item must be ETH.
- * The second not void item must be IPV4/ IPV6.
- * The third not void item must be NVGRE.
- * The next not void item must be END.
- * NVGRE PATTERN:
- * The first not void item must be ETH.
- * The second not void item must be IPV4/ IPV6.
- * The third not void item must be NVGRE.
+ * PATTERN:
+ * The first not void item can be ETH or IPV4 or IPV6 or UDP or tunnel type.
+ * The second not void item must be IPV4 or IPV6 if the first one is ETH.
+ * The next not void item could be UDP or tunnel type.
+ * The next not void item could be a certain inner layer.
  * The next not void item must be END.
  * ACTION:
- * The first not void action should be QUEUE or DROP.
- * The second not void optional action should be MARK,
- * mark_id is a uint32_t number.
+ * The first not void action should be QUEUE.
  * The next not void action should be END.
- * VxLAN pattern example:
+ * pattern example:
  * ITEM		Spec			Mask
  * ETH		NULL			NULL
- * IPV4/IPV6	NULL			NULL
+ * IPV4		NULL			NULL
  * UDP		NULL			NULL
- * VxLAN	vni{0x00, 0x32, 0x54}	{0xFF, 0xFF, 0xFF}
- * MAC VLAN	tci	0x2016		0xEFFF
- * END
- * NEGRV pattern example:
- * ITEM		Spec			Mask
+ * VXLAN	NULL			NULL
  * ETH		NULL			NULL
- * IPV4/IPV6	NULL			NULL
- * NVGRE	protocol	0x6558	0xFFFF
- *		tni{0x00, 0x32, 0x54}	{0xFF, 0xFF, 0xFF}
- * MAC VLAN	tci	0x2016		0xEFFF
+ * IPV4		src_addr 192.168.1.20	0xFFFFFFFF
+ *		dst_addr 192.167.3.50	0xFFFFFFFF
+ * UDP/TCP/SCTP	src_port	80	0xFFFF
+ *		dst_port	80	0xFFFF
  * END
- * other members in mask and spec should set to 0x00.
- * item->last should be NULL.
  */
 static int
 txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
@@ -2224,6 +2212,17 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 {
 	const struct rte_flow_item *item;
 	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv4 *ipv4_mask;
+	const struct rte_flow_item_ipv6 *ipv6_spec;
+	const struct rte_flow_item_ipv6 *ipv6_mask;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec;
+	const struct rte_flow_item_udp *udp_mask;
+	const struct rte_flow_item_sctp *sctp_spec;
+	const struct rte_flow_item_sctp *sctp_mask;
+	u8 ptid = 0;
 	uint32_t j;
 
 	if (!pattern) {
@@ -2252,12 +2251,14 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	 * value. So, we need not do anything for the not provided fields later.
 	 */
 	memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-	memset(&rule->mask, 0xFF, sizeof(struct txgbe_hw_fdir_mask));
-	rule->mask.vlan_tci_mask = 0;
+	memset(&rule->mask, 0, sizeof(struct txgbe_hw_fdir_mask));
+	rule->mask.pkt_type_mask = TXGBE_ATR_TYPE_MASK_TUN_OUTIP |
+				   TXGBE_ATR_TYPE_MASK_L3P |
+				   TXGBE_ATR_TYPE_MASK_L4P;
 
 	/**
 	 * The first not void item should be
-	 * MAC or IPv4 or IPv6 or UDP or VxLAN.
+	 * MAC or IPv4 or IPv6 or UDP or tunnel.
 	 */
 	item = next_no_void_pattern(pattern, NULL);
 	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
@@ -2265,7 +2266,9 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
 	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
 	    item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
-	    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
+	    item->type != RTE_FLOW_ITEM_TYPE_NVGRE &&
+	    item->type != RTE_FLOW_ITEM_TYPE_GRE &&
+	    item->type != RTE_FLOW_ITEM_TYPE_GENEVE) {
 		memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 		rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2273,7 +2276,8 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		return -rte_errno;
 	}
 
-	rule->mode = RTE_FDIR_MODE_PERFECT_TUNNEL;
+	rule->mode = RTE_FDIR_MODE_PERFECT;
+	ptid = TXGBE_PTID_PKT_TUN;
 
 	/* Skip MAC. */
 	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
@@ -2295,6 +2299,8 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 		/* Check if the next not void item is IPv4 or IPv6. */
 		item = next_no_void_pattern(pattern, item);
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN)
+			item = next_no_fuzzy_pattern(pattern, item);
 		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
 		    item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
@@ -2308,6 +2314,8 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 	/* Skip IP. */
 	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
 	    item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_TUN_OUTIP;
+
 		/* Only used to describe the protocol stack. */
 		if (item->spec || item->mask) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
@@ -2324,10 +2332,17 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 
-		/* Check if the next not void item is UDP or NVGRE. */
+		if (item->type == RTE_FLOW_ITEM_TYPE_IPV6)
+			ptid |= TXGBE_PTID_TUN_IPV6;
+
 		item = next_no_void_pattern(pattern, item);
-		if (item->type != RTE_FLOW_ITEM_TYPE_UDP &&
-		    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+		    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+		    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
+		    item->type != RTE_FLOW_ITEM_TYPE_GRE &&
+		    item->type != RTE_FLOW_ITEM_TYPE_NVGRE &&
+		    item->type != RTE_FLOW_ITEM_TYPE_GENEVE) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2338,6 +2353,31 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 	/* Skip UDP. */
 	if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/* Check if the next not void item is VxLAN or GENEVE. */
+		item = next_no_void_pattern(pattern, item);
+		if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
+		    item->type != RTE_FLOW_ITEM_TYPE_GENEVE) {
+			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+	}
+
+	/* Skip tunnel. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN ||
+	    item->type == RTE_FLOW_ITEM_TYPE_GRE ||
+	    item->type == RTE_FLOW_ITEM_TYPE_NVGRE ||
+	    item->type == RTE_FLOW_ITEM_TYPE_GENEVE) {
 		/* Only used to describe the protocol stack. */
 		if (item->spec || item->mask) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
@@ -2354,9 +2394,15 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 			return -rte_errno;
 		}
 
-		/* Check if the next not void item is VxLAN. */
+		if (item->type == RTE_FLOW_ITEM_TYPE_GRE)
+			ptid |= TXGBE_PTID_TUN_EIG;
+		else
+			ptid |= TXGBE_PTID_TUN_EIGM;
+
 		item = next_no_void_pattern(pattern, item);
-		if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN) {
+		if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+		    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+		    item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -2365,100 +2411,421 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 		}
 	}
 
-	/* check if the next not void item is MAC */
-	item = next_no_void_pattern(pattern, item);
-	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
-		memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-		rte_flow_error_set(error, EINVAL,
-			RTE_FLOW_ERROR_TYPE_ITEM,
-			item, "Not supported by fdir filter");
-		return -rte_errno;
-	}
+	/* Get the MAC info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/**
+		 * Only support vlan and dst MAC address,
+		 * others should be masked.
+		 */
+		if (item->spec && !item->mask) {
+			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
 
-	/**
-	 * Only support vlan and dst MAC address,
-	 * others should be masked.
-	 */
+		if (item->mask) {
+			rule->b_mask = TRUE;
+			eth_mask = item->mask;
 
-	if (!item->mask) {
-		memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-		rte_flow_error_set(error, EINVAL,
-			RTE_FLOW_ERROR_TYPE_ITEM,
-			item, "Not supported by fdir filter");
-		return -rte_errno;
+			/* Ether type should be masked. */
+			if (eth_mask->hdr.ether_type) {
+				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+
+			/**
+			 * src MAC address must be masked,
+			 * and don't support dst MAC address mask.
+			 */
+			for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
+				if (eth_mask->hdr.src_addr.addr_bytes[j] ||
+				    eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
+					memset(rule, 0,
+					       sizeof(struct txgbe_fdir_rule));
+					rte_flow_error_set(error, EINVAL,
+							   RTE_FLOW_ERROR_TYPE_ITEM,
+							   item, "Not supported by fdir filter");
+					return -rte_errno;
+				}
+			}
+
+			/* When no VLAN, considered as full mask. */
+			rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
+		}
+
+		item = next_no_fuzzy_pattern(pattern, item);
+		if (rule->mask.vlan_tci_mask) {
+			if (item->type != RTE_FLOW_ITEM_TYPE_VLAN) {
+				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+		} else {
+			if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+			    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+			    item->type != RTE_FLOW_ITEM_TYPE_VLAN) {
+				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+		}
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+			ptid |= TXGBE_PTID_TUN_EIGMV;
+			item = next_no_fuzzy_pattern(pattern, item);
+		}
 	}
-	/*Not supported last point for range*/
-	if (item->last) {
-		rte_flow_error_set(error, EINVAL,
-			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-			item, "Not supported last point for range");
-		return -rte_errno;
+
+	/* Get the IPV4 info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->input.flow_type = TXGBE_ATR_FLOW_TYPE_IPV4;
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L3P;
+
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst addresses,
+		 * others should be masked.
+		 */
+		if (item->spec && !item->mask) {
+			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		if (item->mask) {
+			rule->b_mask = TRUE;
+			ipv4_mask = item->mask;
+			if (ipv4_mask->hdr.version_ihl ||
+			    ipv4_mask->hdr.type_of_service ||
+			    ipv4_mask->hdr.total_length ||
+			    ipv4_mask->hdr.packet_id ||
+			    ipv4_mask->hdr.fragment_offset ||
+			    ipv4_mask->hdr.time_to_live ||
+			    ipv4_mask->hdr.next_proto_id ||
+			    ipv4_mask->hdr.hdr_checksum) {
+				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+			rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr;
+			rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr;
+		}
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			ipv4_spec = item->spec;
+			rule->input.dst_ip[0] =
+				ipv4_spec->hdr.dst_addr;
+			rule->input.src_ip[0] =
+				ipv4_spec->hdr.src_addr;
+		}
+
+		/**
+		 * Check if the next not void item is
+		 * TCP or UDP or SCTP or END.
+		 */
+		item = next_no_fuzzy_pattern(pattern, item);
+		if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_SCTP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
 	}
-	rule->b_mask = TRUE;
-	eth_mask = item->mask;
 
-	/* Ether type should be masked. */
-	if (eth_mask->hdr.ether_type) {
-		memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-		rte_flow_error_set(error, EINVAL,
-			RTE_FLOW_ERROR_TYPE_ITEM,
-			item, "Not supported by fdir filter");
-		return -rte_errno;
+	/* Get the IPV6 info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->input.flow_type = TXGBE_ATR_FLOW_TYPE_IPV6;
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L3P;
+
+		if (item->last) {
+			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		if (item->spec && !item->mask) {
+			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		if (item->mask) {
+			rule->b_mask = TRUE;
+			ipv6_mask = item->mask;
+			if (ipv6_mask->hdr.vtc_flow ||
+			    ipv6_mask->hdr.payload_len ||
+			    ipv6_mask->hdr.proto ||
+			    ipv6_mask->hdr.hop_limits) {
+				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+
+			/* check src addr mask */
+			for (j = 0; j < 16; j++) {
+				if (ipv6_mask->hdr.src_addr.a[j] == UINT8_MAX) {
+					rule->mask.src_ipv6_mask |= 1 << j;
+				} else if (ipv6_mask->hdr.src_addr.a[j] != 0) {
+					memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+					rte_flow_error_set(error, EINVAL,
+							   RTE_FLOW_ERROR_TYPE_ITEM,
+							   item, "Not supported by fdir filter");
+					return -rte_errno;
+				}
+			}
+
+			/* check dst addr mask */
+			for (j = 0; j < 16; j++) {
+				if (ipv6_mask->hdr.dst_addr.a[j] == UINT8_MAX) {
+					rule->mask.dst_ipv6_mask |= 1 << j;
+				} else if (ipv6_mask->hdr.dst_addr.a[j] != 0) {
+					memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+					rte_flow_error_set(error, EINVAL,
+							   RTE_FLOW_ERROR_TYPE_ITEM,
+							   item, "Not supported by fdir filter");
+					return -rte_errno;
+				}
+			}
+		}
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			ipv6_spec = item->spec;
+			rte_memcpy(rule->input.src_ip,
+				   &ipv6_spec->hdr.src_addr, 16);
+			rte_memcpy(rule->input.dst_ip,
+				   &ipv6_spec->hdr.dst_addr, 16);
+		}
+
+		/**
+		 * Check if the next not void item is
+		 * TCP or UDP or SCTP or END.
+		 */
+		item = next_no_fuzzy_pattern(pattern, item);
+		if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_SCTP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
 	}
 
-	/* src MAC address should be masked. */
-	for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
-		if (eth_mask->hdr.src_addr.addr_bytes[j]) {
-			memset(rule, 0,
-			       sizeof(struct txgbe_fdir_rule));
+	/* Get the TCP info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->input.flow_type |= TXGBE_ATR_L4TYPE_TCP;
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P;
+
+		/*Not supported last point for range*/
+		if (item->last) {
 			rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM,
-				item, "Not supported by fdir filter");
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		tcp_mask = item->mask;
+		if (tcp_mask->hdr.sent_seq ||
+		    tcp_mask->hdr.recv_ack ||
+		    tcp_mask->hdr.data_off ||
+		    tcp_mask->hdr.tcp_flags ||
+		    tcp_mask->hdr.rx_win ||
+		    tcp_mask->hdr.cksum ||
+		    tcp_mask->hdr.tcp_urp) {
+			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item, "Not supported by fdir filter");
 			return -rte_errno;
 		}
+		rule->mask.src_port_mask = tcp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = tcp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			tcp_spec = item->spec;
+			rule->input.src_port =
+				tcp_spec->hdr.src_port;
+			rule->input.dst_port =
+				tcp_spec->hdr.dst_port;
+		}
 	}
-	rule->mask.mac_addr_byte_mask = 0;
-	for (j = 0; j < ETH_ADDR_LEN; j++) {
-		/* It's a per byte mask. */
-		if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
-			rule->mask.mac_addr_byte_mask |= 0x1 << j;
-		} else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
+
+	/* Get the UDP info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->input.flow_type |= TXGBE_ATR_L4TYPE_UDP;
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   item, "Not supported last point for range");
+			return -rte_errno;
+		}
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM,
-				item, "Not supported by fdir filter");
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		udp_mask = item->mask;
+		if (udp_mask->hdr.dgram_len ||
+		    udp_mask->hdr.dgram_cksum) {
+			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item, "Not supported by fdir filter");
 			return -rte_errno;
 		}
+		rule->mask.src_port_mask = udp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = udp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			udp_spec = item->spec;
+			rule->input.src_port =
+				udp_spec->hdr.src_port;
+			rule->input.dst_port =
+				udp_spec->hdr.dst_port;
+		}
 	}
 
-	/* When no vlan, considered as full mask. */
-	rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
+	/* Get the SCTP info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_SCTP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->input.flow_type |= TXGBE_ATR_L4TYPE_SCTP;
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P;
 
-	/**
-	 * Check if the next not void item is vlan or ipv4.
-	 * IPv6 is not supported.
-	 */
-	item = next_no_void_pattern(pattern, item);
-	if (item->type != RTE_FLOW_ITEM_TYPE_VLAN &&
-		item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
-		memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-		rte_flow_error_set(error, EINVAL,
-			RTE_FLOW_ERROR_TYPE_ITEM,
-			item, "Not supported by fdir filter");
-		return -rte_errno;
+		/*Not supported last point for range*/
+		if (item->last) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   item, "Not supported last point for range");
+			return -rte_errno;
+		}
+
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->b_mask = TRUE;
+		sctp_mask = item->mask;
+		if (sctp_mask->hdr.tag || sctp_mask->hdr.cksum) {
+			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
+		rule->mask.src_port_mask = sctp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = sctp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			sctp_spec = item->spec;
+			rule->input.src_port =
+				sctp_spec->hdr.src_port;
+			rule->input.dst_port =
+				sctp_spec->hdr.dst_port;
+		}
+		/* others even sctp port is not supported */
+		sctp_mask = item->mask;
+		if (sctp_mask &&
+		    (sctp_mask->hdr.src_port ||
+		     sctp_mask->hdr.dst_port ||
+		     sctp_mask->hdr.tag ||
+		     sctp_mask->hdr.cksum)) {
+			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
 	}
-	/*Not supported last point for range*/
-	if (item->last) {
-		rte_flow_error_set(error, EINVAL,
-			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-			item, "Not supported last point for range");
-		return -rte_errno;
+
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		/* check if the next not void item is END */
+		item = next_no_fuzzy_pattern(pattern, item);
+		if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item, "Not supported by fdir filter");
+			return -rte_errno;
+		}
 	}
 
-	/**
-	 * If the tags is 0, it means don't care about the VLAN.
-	 * Do nothing.
-	 */
+	txgbe_fdir_parse_flow_type(&rule->input, ptid, true);
 
 	return txgbe_parse_fdir_act_attr(attr, actions, rule, error);
 }
-- 
2.48.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v3 08/17] net/txgbe: fix reserved extra FDIR headroom
       [not found] ` <20250613084159.22184-1-jiawenwu@trustnetic.com>
                     ` (5 preceding siblings ...)
  2025-06-13  8:41   ` [PATCH v3 07/17] net/txgbe: fix to create FDIR filters for tunnel packets Jiawen Wu
@ 2025-06-13  8:41   ` Jiawen Wu
  2025-06-13  8:41   ` [PATCH v3 11/17] net/txgbe: fix MAC control frame forwarding Jiawen Wu
                     ` (6 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Jiawen Wu @ 2025-06-13  8:41 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

Remove the redundant 256KB FDIR headroom reservation. The FDIR headroom
is already allocated in txgbe_fdir_configure() when FDIR is enabled, so
the second reservation left 256KB less Rx packet buffer available than
the theoretical size.

Fixes: 8bdc7882f376 ("net/txgbe: support DCB")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_hw.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index ae2ad87c83..76b9ee3c0a 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -2106,9 +2106,7 @@ void txgbe_set_pba(struct txgbe_hw *hw, int num_pb, u32 headroom,
 	u32 rxpktsize, txpktsize, txpbthresh;
 
 	UNREFERENCED_PARAMETER(hw);
-
-	/* Reserve headroom */
-	pbsize -= headroom;
+	UNREFERENCED_PARAMETER(headroom);
 
 	if (!num_pb)
 		num_pb = 1;
-- 
2.48.1



* [PATCH v3 11/17] net/txgbe: fix MAC control frame forwarding
       [not found] ` <20250613084159.22184-1-jiawenwu@trustnetic.com>
                     ` (6 preceding siblings ...)
  2025-06-13  8:41   ` [PATCH v3 08/17] net/txgbe: fix reserved extra FDIR headroom Jiawen Wu
@ 2025-06-13  8:41   ` Jiawen Wu
  2025-06-13  8:41   ` [PATCH v3 12/17] net/ngbe: " Jiawen Wu
                     ` (5 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Jiawen Wu @ 2025-06-13  8:41 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

The test case "test_pause_fwd_port_stop_start" fails: it expects the
MAC control frame forwarding setting to keep working after a port
stop/start. Fix the bug so that the test case passes.

Fixes: 69ce8c8a4ce3 ("net/txgbe: support flow control")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_hw.c   | 9 +++++++++
 drivers/net/txgbe/base/txgbe_type.h | 1 +
 drivers/net/txgbe/txgbe_ethdev.c    | 1 +
 3 files changed, 11 insertions(+)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 76b9ee3c0a..42cd0e0e2c 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -226,6 +226,15 @@ s32 txgbe_setup_fc(struct txgbe_hw *hw)
 				      TXGBE_MD_DEV_AUTO_NEG, reg_cu);
 	}
 
+	/*
+	 * Reconfig mac ctrl frame fwd rule to make sure it still
+	 * working after port stop/start.
+	 */
+	wr32m(hw, TXGBE_MACRXFLT, TXGBE_MACRXFLT_CTL_MASK,
+	      (hw->fc.mac_ctrl_frame_fwd ?
+	       TXGBE_MACRXFLT_CTL_NOPS : TXGBE_MACRXFLT_CTL_DROP));
+	txgbe_flush(hw);
+
 	DEBUGOUT("Set up FC; reg = 0x%08X", reg);
 out:
 	return err;
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 383438ea3c..65527a22e7 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -299,6 +299,7 @@ struct txgbe_fc_info {
 	u32 high_water[TXGBE_DCB_TC_MAX]; /* Flow Ctrl High-water */
 	u32 low_water[TXGBE_DCB_TC_MAX]; /* Flow Ctrl Low-water */
 	u16 pause_time; /* Flow Control Pause timer */
+	u8 mac_ctrl_frame_fwd; /* Forward MAC control frames */
 	bool send_xon; /* Flow control send XON */
 	bool strict_ieee; /* Strict IEEE mode */
 	bool disable_fc_autoneg; /* Do not autonegotiate FC */
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index e5736bf387..b68a0557be 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3586,6 +3586,7 @@ txgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	hw->fc.low_water[0]   = fc_conf->low_water;
 	hw->fc.send_xon       = fc_conf->send_xon;
 	hw->fc.disable_fc_autoneg = !fc_conf->autoneg;
+	hw->fc.mac_ctrl_frame_fwd = fc_conf->mac_ctrl_frame_fwd;
 
 	err = txgbe_fc_enable(hw);
 
-- 
2.48.1



* [PATCH v3 12/17] net/ngbe: fix MAC control frame forwarding
       [not found] ` <20250613084159.22184-1-jiawenwu@trustnetic.com>
                     ` (7 preceding siblings ...)
  2025-06-13  8:41   ` [PATCH v3 11/17] net/txgbe: fix MAC control frame forwarding Jiawen Wu
@ 2025-06-13  8:41   ` Jiawen Wu
  2025-06-13  8:41   ` [PATCH v3 13/17] net/txgbe: fix incorrect device statistics Jiawen Wu
                     ` (4 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Jiawen Wu @ 2025-06-13  8:41 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

The test case "test_pause_fwd_port_stop_start" fails: it expects the
MAC control frame forwarding setting to keep working after a port
stop/start. Fix the bug so that the test case passes.

Fixes: f40e9f0e2278 ("net/ngbe: support flow control")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/base/ngbe_hw.c   | 9 +++++++++
 drivers/net/ngbe/base/ngbe_type.h | 1 +
 drivers/net/ngbe/ngbe_ethdev.c    | 1 +
 3 files changed, 11 insertions(+)

diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 6688ae6a31..bf09f8a817 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -865,6 +865,15 @@ s32 ngbe_setup_fc_em(struct ngbe_hw *hw)
 		goto out;
 	}
 
+	/*
+	 * Reconfig mac ctrl frame fwd rule to make sure it still
+	 * working after port stop/start.
+	 */
+	wr32m(hw, NGBE_MACRXFLT, NGBE_MACRXFLT_CTL_MASK,
+	      (hw->fc.mac_ctrl_frame_fwd ?
+	       NGBE_MACRXFLT_CTL_NOPS : NGBE_MACRXFLT_CTL_DROP));
+	ngbe_flush(hw);
+
 	err = hw->phy.set_pause_adv(hw, reg_cu);
 
 out:
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 7a3b52ffd4..fc571c7457 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -112,6 +112,7 @@ struct ngbe_fc_info {
 	u32 high_water; /* Flow Ctrl High-water */
 	u32 low_water; /* Flow Ctrl Low-water */
 	u16 pause_time; /* Flow Control Pause timer */
+	u8 mac_ctrl_frame_fwd; /* Forward MAC control frames */
 	bool send_xon; /* Flow control send XON */
 	bool strict_ieee; /* Strict IEEE mode */
 	bool disable_fc_autoneg; /* Do not autonegotiate FC */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 08e87471f6..a8f847de8d 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -2420,6 +2420,7 @@ ngbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	hw->fc.low_water      = fc_conf->low_water;
 	hw->fc.send_xon       = fc_conf->send_xon;
 	hw->fc.disable_fc_autoneg = !fc_conf->autoneg;
+	hw->fc.mac_ctrl_frame_fwd = fc_conf->mac_ctrl_frame_fwd;
 
 	err = hw->mac.fc_enable(hw);
 
-- 
2.48.1



* [PATCH v3 13/17] net/txgbe: fix incorrect device statistics
       [not found] ` <20250613084159.22184-1-jiawenwu@trustnetic.com>
                     ` (8 preceding siblings ...)
  2025-06-13  8:41   ` [PATCH v3 12/17] net/ngbe: " Jiawen Wu
@ 2025-06-13  8:41   ` Jiawen Wu
  2025-06-13  8:41   ` [PATCH v3 14/17] net/ngbe: " Jiawen Wu
                     ` (3 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Jiawen Wu @ 2025-06-13  8:41 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

The extended statistic "rx_undersize_errors" is incorrectly read from
the counter of frames received with a length error, which is named
"rx_length_errors". "rx_undersize_errors" should instead count
shorter-than-64B frames received without any errors.

In addition, "tx_broadcast_packets" should use rd64() to get the full
count on the low and high registers.

Fixes: c9bb590d4295 ("net/txgbe: support device statistics")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_ethdev.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index b68a0557be..580579094b 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -2250,7 +2250,7 @@ txgbe_read_stats_registers(struct txgbe_hw *hw,
 	hw_stats->rx_total_bytes += rd64(hw, TXGBE_MACRXGBOCTL);
 
 	hw_stats->rx_broadcast_packets += rd64(hw, TXGBE_MACRXOCTL);
-	hw_stats->tx_broadcast_packets += rd32(hw, TXGBE_MACTXOCTL);
+	hw_stats->tx_broadcast_packets += rd64(hw, TXGBE_MACTXOCTL);
 
 	hw_stats->rx_size_64_packets += rd64(hw, TXGBE_MACRX1TO64L);
 	hw_stats->rx_size_65_to_127_packets += rd64(hw, TXGBE_MACRX65TO127L);
@@ -2269,7 +2269,8 @@ txgbe_read_stats_registers(struct txgbe_hw *hw,
 	hw_stats->tx_size_1024_to_max_packets +=
 			rd64(hw, TXGBE_MACTX1024TOMAXL);
 
-	hw_stats->rx_undersize_errors += rd64(hw, TXGBE_MACRXERRLENL);
+	hw_stats->rx_length_errors += rd64(hw, TXGBE_MACRXERRLENL);
+	hw_stats->rx_undersize_errors += rd32(hw, TXGBE_MACRXUNDERSIZE);
 	hw_stats->rx_oversize_cnt += rd32(hw, TXGBE_MACRXOVERSIZE);
 	hw_stats->rx_jabber_errors += rd32(hw, TXGBE_MACRXJABBER);
 
-- 
2.48.1



* [PATCH v3 14/17] net/ngbe: fix incorrect device statistics
       [not found] ` <20250613084159.22184-1-jiawenwu@trustnetic.com>
                     ` (9 preceding siblings ...)
  2025-06-13  8:41   ` [PATCH v3 13/17] net/txgbe: fix incorrect device statistics Jiawen Wu
@ 2025-06-13  8:41   ` Jiawen Wu
  2025-06-13  8:41   ` [PATCH v3 15/17] net/txgbe: restrict VLAN strip configuration on VF Jiawen Wu
                     ` (2 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Jiawen Wu @ 2025-06-13  8:41 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

The extended statistic "rx_undersize_errors" is incorrectly read from
the counter of frames received with a length error, which is named
"rx_length_errors". "rx_undersize_errors" should instead count
shorter-than-64B frames received without any errors.

In addition, "tx_broadcast_packets" should use rd64() to get the full
count on the low and high registers.

Fixes: fdb1e851975a ("net/ngbe: support basic statistics")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/ngbe_ethdev.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index a8f847de8d..d3ac40299f 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -1429,7 +1429,7 @@ ngbe_read_stats_registers(struct ngbe_hw *hw,
 	hw_stats->rx_total_bytes += rd64(hw, NGBE_MACRXGBOCTL);
 
 	hw_stats->rx_broadcast_packets += rd64(hw, NGBE_MACRXOCTL);
-	hw_stats->tx_broadcast_packets += rd32(hw, NGBE_MACTXOCTL);
+	hw_stats->tx_broadcast_packets += rd64(hw, NGBE_MACTXOCTL);
 
 	hw_stats->rx_size_64_packets += rd64(hw, NGBE_MACRX1TO64L);
 	hw_stats->rx_size_65_to_127_packets += rd64(hw, NGBE_MACRX65TO127L);
@@ -1448,7 +1448,8 @@ ngbe_read_stats_registers(struct ngbe_hw *hw,
 	hw_stats->tx_size_1024_to_max_packets +=
 			rd64(hw, NGBE_MACTX1024TOMAXL);
 
-	hw_stats->rx_undersize_errors += rd64(hw, NGBE_MACRXERRLENL);
+	hw_stats->rx_length_errors += rd64(hw, NGBE_MACRXERRLENL);
+	hw_stats->rx_undersize_errors += rd32(hw, NGBE_MACRXUNDERSIZE);
 	hw_stats->rx_oversize_cnt += rd32(hw, NGBE_MACRXOVERSIZE);
 	hw_stats->rx_jabber_errors += rd32(hw, NGBE_MACRXJABBER);
 
-- 
2.48.1



* [PATCH v3 15/17] net/txgbe: restrict VLAN strip configuration on VF
       [not found] ` <20250613084159.22184-1-jiawenwu@trustnetic.com>
                     ` (10 preceding siblings ...)
  2025-06-13  8:41   ` [PATCH v3 14/17] net/ngbe: " Jiawen Wu
@ 2025-06-13  8:41   ` Jiawen Wu
  2025-06-13  8:41   ` [PATCH v3 16/17] net/ngbe: " Jiawen Wu
  2025-06-13  8:41   ` [PATCH v3 17/17] net/txgbe: add missing LRO flag in mbuf when LRO enabled Jiawen Wu
  13 siblings, 0 replies; 22+ messages in thread
From: Jiawen Wu @ 2025-06-13  8:41 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

Fix the same issue as PF in commit 66364efcf958 ("net/txgbe: restrict
configuration of VLAN strip offload").

There is a hardware limitation that Rx ring config register is not
writable when Rx ring is enabled, i.e. the TXGBE_RXCFG_ENA bit is set.
But disabling the ring when there is traffic will cause ring get stuck.
So restrict the configuration of VLAN strip offload only if device is
started.

Fixes: aa1ae7941e71 ("net/txgbe: support VF VLAN")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_ethdev_vf.c | 31 +++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index c0d8aa15b2..847febf8c3 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -935,7 +935,7 @@ txgbevf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 }
 
 static void
-txgbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+txgbevf_vlan_strip_q_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 {
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	uint32_t ctrl;
@@ -946,20 +946,28 @@ txgbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 		return;
 
 	ctrl = rd32(hw, TXGBE_RXCFG(queue));
-	txgbe_dev_save_rx_queue(hw, queue);
 	if (on)
 		ctrl |= TXGBE_RXCFG_VLAN;
 	else
 		ctrl &= ~TXGBE_RXCFG_VLAN;
-	wr32(hw, TXGBE_RXCFG(queue), 0);
-	msec_delay(100);
-	txgbe_dev_store_rx_queue(hw, queue);
-	wr32m(hw, TXGBE_RXCFG(queue),
-		TXGBE_RXCFG_VLAN | TXGBE_RXCFG_ENA, ctrl);
+	wr32(hw, TXGBE_RXCFG(queue), ctrl);
 
 	txgbe_vlan_hw_strip_bitmap_set(dev, queue, on);
 }
 
+static void
+txgbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	if (!hw->adapter_stopped) {
+		PMD_DRV_LOG(ERR, "Please stop port first");
+		return;
+	}
+
+	txgbevf_vlan_strip_q_set(dev, queue, on);
+}
+
 static int
 txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 {
@@ -972,7 +980,7 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
 			on = !!(rxq->offloads &	RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
-			txgbevf_vlan_strip_queue_set(dev, i, on);
+			txgbevf_vlan_strip_q_set(dev, i, on);
 		}
 	}
 
@@ -982,6 +990,13 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 static int
 txgbevf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 {
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	if (!hw->adapter_stopped && (mask & RTE_ETH_VLAN_STRIP_MASK)) {
+		PMD_DRV_LOG(ERR, "Please stop port first");
+		return -EPERM;
+	}
+
 	txgbe_config_vlan_strip_on_all_queues(dev, mask);
 
 	txgbevf_vlan_offload_config(dev, mask);
-- 
2.48.1



* [PATCH v3 16/17] net/ngbe: restrict VLAN strip configuration on VF
       [not found] ` <20250613084159.22184-1-jiawenwu@trustnetic.com>
                     ` (11 preceding siblings ...)
  2025-06-13  8:41   ` [PATCH v3 15/17] net/txgbe: restrict VLAN strip configuration on VF Jiawen Wu
@ 2025-06-13  8:41   ` Jiawen Wu
  2025-06-13  8:41   ` [PATCH v3 17/17] net/txgbe: add missing LRO flag in mbuf when LRO enabled Jiawen Wu
  13 siblings, 0 replies; 22+ messages in thread
From: Jiawen Wu @ 2025-06-13  8:41 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

Fix the same issue as PF in commit baca8ec066dc ("net/ngbe: restrict
configuration of VLAN strip offload").

There is a hardware limitation that Rx ring config register is not
writable when Rx ring is enabled, i.e. the TXGBE_RXCFG_ENA bit is set.
But disabling the ring when there is traffic will cause ring get stuck.
So restrict the configuration of VLAN strip offload only if device is
started.

Fixes: f47dc03c706f ("net/ngbe: add VLAN ops for VF device")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/ngbe_ethdev_vf.c | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ngbe/ngbe_ethdev_vf.c b/drivers/net/ngbe/ngbe_ethdev_vf.c
index 5d68f1602d..846bc981f6 100644
--- a/drivers/net/ngbe/ngbe_ethdev_vf.c
+++ b/drivers/net/ngbe/ngbe_ethdev_vf.c
@@ -828,7 +828,7 @@ ngbevf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 }
 
 static void
-ngbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+ngbevf_vlan_strip_q_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 {
 	struct ngbe_hw *hw = ngbe_dev_hw(dev);
 	uint32_t ctrl;
@@ -848,6 +848,19 @@ ngbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 	ngbe_vlan_hw_strip_bitmap_set(dev, queue, on);
 }
 
+static void
+ngbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+{
+	struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+	if (!hw->adapter_stopped) {
+		PMD_DRV_LOG(ERR, "Please stop port first");
+		return;
+	}
+
+	ngbevf_vlan_strip_q_set(dev, queue, on);
+}
+
 static int
 ngbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 {
@@ -860,7 +873,7 @@ ngbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
 			on = !!(rxq->offloads &	RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
-			ngbevf_vlan_strip_queue_set(dev, i, on);
+			ngbevf_vlan_strip_q_set(dev, i, on);
 		}
 	}
 
@@ -870,6 +883,13 @@ ngbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 static int
 ngbevf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 {
+	struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+	if (!hw->adapter_stopped && (mask & RTE_ETH_VLAN_STRIP_MASK)) {
+		PMD_DRV_LOG(ERR, "Please stop port first");
+		return -EPERM;
+	}
+
 	ngbe_config_vlan_strip_on_all_queues(dev, mask);
 
 	ngbevf_vlan_offload_config(dev, mask);
-- 
2.48.1



* [PATCH v3 17/17] net/txgbe: add missing LRO flag in mbuf when LRO enabled
       [not found] ` <20250613084159.22184-1-jiawenwu@trustnetic.com>
                     ` (12 preceding siblings ...)
  2025-06-13  8:41   ` [PATCH v3 16/17] net/ngbe: " Jiawen Wu
@ 2025-06-13  8:41   ` Jiawen Wu
  13 siblings, 0 replies; 22+ messages in thread
From: Jiawen Wu @ 2025-06-13  8:41 UTC (permalink / raw)
  To: dev; +Cc: zaiyuwang, Jiawen Wu, stable

When LRO is enabled, the driver must set the LRO flag in received
aggregated packets to indicate LRO processing to upper-layer
applications. Fix this by adding the missing LRO flag to the ol_flags
field of the mbuf.

Fixes: 0e484278c85f ("net/txgbe: support Rx")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_rxtx.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index a85d417ff6..e6f33739c4 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1793,6 +1793,8 @@ txgbe_fill_cluster_head_buf(struct rte_mbuf *head, struct txgbe_rx_desc *desc,
 	pkt_flags = rx_desc_status_to_pkt_flags(staterr, rxq->vlan_flags);
 	pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
 	pkt_flags |= txgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+	if (TXGBE_RXD_RSCCNT(desc->qw0.dw0))
+		pkt_flags |= RTE_MBUF_F_RX_LRO;
 	head->ol_flags = pkt_flags;
 	head->packet_type = txgbe_rxd_pkt_info_to_pkt_type(pkt_info,
 						rxq->pkt_type_mask);
-- 
2.48.1



end of thread, other threads:[~2025-06-13  8:43 UTC | newest]

Thread overview: 22+ messages
     [not found] <00DEAE896AFE0D2D+20250606080117.183198-1-jiawenwu@trustnetic.com>
     [not found] ` <20250609070454.223387-1-jiawenwu@trustnetic.com>
2025-06-09  7:04   ` [PATCH v2 03/12] net/txgbe: fix reserved extra FDIR headroom Jiawen Wu
2025-06-09  7:04   ` [PATCH v2 06/12] net/txgbe: fix MAC control frame forwarding Jiawen Wu
2025-06-09  7:04   ` [PATCH v2 07/12] net/ngbe: " Jiawen Wu
2025-06-09  7:04   ` [PATCH v2 08/12] net/txgbe: fix incorrect device statistics Jiawen Wu
2025-06-09  7:04   ` [PATCH v2 09/12] net/ngbe: " Jiawen Wu
2025-06-09  7:04   ` [PATCH v2 10/12] net/txgbe: restrict VLAN strip configuration on VF Jiawen Wu
2025-06-09  7:04   ` [PATCH v2 11/12] net/ngbe: " Jiawen Wu
2025-06-09  7:04   ` [PATCH v2 12/12] net/txgbe: add missing LRO flag in mbuf when LRO enabled Jiawen Wu
     [not found] ` <20250613084159.22184-1-jiawenwu@trustnetic.com>
2025-06-13  8:41   ` [PATCH v3 02/17] net/txgbe: fix incorrect parsing to ntuple filter Jiawen Wu
2025-06-13  8:41   ` [PATCH v3 03/17] net/txgbe: fix raw pattern match for FDIR rules Jiawen Wu
2025-06-13  8:41   ` [PATCH v3 04/17] net/txgbe: fix packet type for FDIR filters Jiawen Wu
2025-06-13  8:41   ` [PATCH v3 05/17] net/txgbe: fix to create FDIR filters for SCTP packets Jiawen Wu
2025-06-13  8:41   ` [PATCH v3 06/17] net/txgbe: fix FDIR perfect mode for IPv6 packets Jiawen Wu
2025-06-13  8:41   ` [PATCH v3 07/17] net/txgbe: fix to create FDIR filters for tunnel packets Jiawen Wu
2025-06-13  8:41   ` [PATCH v3 08/17] net/txgbe: fix reserved extra FDIR headroom Jiawen Wu
2025-06-13  8:41   ` [PATCH v3 11/17] net/txgbe: fix MAC control frame forwarding Jiawen Wu
2025-06-13  8:41   ` [PATCH v3 12/17] net/ngbe: " Jiawen Wu
2025-06-13  8:41   ` [PATCH v3 13/17] net/txgbe: fix incorrect device statistics Jiawen Wu
2025-06-13  8:41   ` [PATCH v3 14/17] net/ngbe: " Jiawen Wu
2025-06-13  8:41   ` [PATCH v3 15/17] net/txgbe: restrict VLAN strip configuration on VF Jiawen Wu
2025-06-13  8:41   ` [PATCH v3 16/17] net/ngbe: " Jiawen Wu
2025-06-13  8:41   ` [PATCH v3 17/17] net/txgbe: add missing LRO flag in mbuf when LRO enabled Jiawen Wu
