* [PATCH 1/8] net/txgbe: fix Rx buffer size in configure register
[not found] <20230118060039.3074016-1-jiawenwu@trustnetic.com>
@ 2023-01-18 6:00 ` Jiawen Wu
2023-01-27 15:36 ` Ferruh Yigit
2023-01-18 6:00 ` [PATCH 2/8] net/txgbe: fix default signal quality value for KX/KX4 Jiawen Wu
` (5 subsequent siblings)
6 siblings, 1 reply; 27+ messages in thread
From: Jiawen Wu @ 2023-01-18 6:00 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu, stable
When rounding up the buffer size to 1K to configure the register, the
hardware will receive packets exceeding the actual buffer size in LRO
mode. This causes a segmentation fault in the receive function.
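For reference, a minimal sketch of the rounding involved, assuming the
usual power-of-two ROUND_UP/ROUND_DOWN helper definitions (illustrative
only, not copied from the driver):
	#define ROUND_UP(x, y)    ((((x) + ((y) - 1)) / (y)) * (y))
	#define ROUND_DOWN(x, y)  (((x) / (y)) * (y))
	/* e.g. a 2048-byte mbuf data room minus RTE_PKTMBUF_HEADROOM (128
	 * by default) leaves a 1920-byte buffer:
	 *   ROUND_UP(1920, 1 << 10)   = 2048 -> register claims more room
	 *                                       than the buffer really has
	 *   ROUND_DOWN(1920, 1 << 10) = 1024 -> safe upper bound
	 */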
Fixes: be797cbf4582 ("net/txgbe: add Rx and Tx init")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/txgbe/txgbe_rxtx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index ac1bba08a3..ae70ca3beb 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -4382,7 +4382,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
*/
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
RTE_PKTMBUF_HEADROOM);
- buf_size = ROUND_UP(buf_size, 0x1 << 10);
+ buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
srrctl |= TXGBE_RXCFG_PKTLEN(buf_size);
wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
--
2.27.0
* [PATCH 2/8] net/txgbe: fix default signal quality value for KX/KX4
[not found] <20230118060039.3074016-1-jiawenwu@trustnetic.com>
2023-01-18 6:00 ` [PATCH 1/8] net/txgbe: fix Rx buffer size in configure register Jiawen Wu
@ 2023-01-18 6:00 ` Jiawen Wu
2023-01-18 6:00 ` [PATCH 3/8] net/txgbe: fix packet type to parse from offload flags Jiawen Wu
` (4 subsequent siblings)
6 siblings, 0 replies; 27+ messages in thread
From: Jiawen Wu @ 2023-01-18 6:00 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu, stable
On old firmware versions, the default value of the signal quality
(TX_EQ) is configured by the driver. Fix it for KX/KX4 mode.
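Working out the constant expressions in the patch below (plain
arithmetic, for reference):
	/* KX4: TX_EQ_CTL0: 0x0000 -> 0x2800 (adds 40 << 8)
	 *      TX_EQ_CTL1: 0x68   -> 0x40   (drops the "| 40" term)
	 * KX:   TX_EQ_CTL0: 0x1804 -> 0x2800 (was (24 << 8) | 4)
	 *      TX_EQ_CTL1: 0x50   -> 0x40   (drops the "| 16" term)
	 */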
Fixes: 01c3cf5c85a7 ("net/txgbe: add autoneg control read and write")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/txgbe/base/txgbe_phy.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/txgbe/base/txgbe_phy.c b/drivers/net/txgbe/base/txgbe_phy.c
index 9f46d5bdb0..87935abdaa 100644
--- a/drivers/net/txgbe/base/txgbe_phy.c
+++ b/drivers/net/txgbe/base/txgbe_phy.c
@@ -1693,9 +1693,10 @@ txgbe_set_link_to_kx4(struct txgbe_hw *hw, bool autoneg)
wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
} else if (hw->fw_version <= TXGBE_FW_N_TXEQ) {
value = (0x1804 & ~0x3F3F);
+ value |= 40 << 8;
wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
- value = (0x50 & ~0x7F) | 40 | (1 << 6);
+ value = (0x50 & ~0x7F) | (1 << 6);
wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
}
out:
@@ -1907,10 +1908,10 @@ txgbe_set_link_to_kx(struct txgbe_hw *hw,
value |= hw->phy.ffe_post | (1 << 6);
wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
} else if (hw->fw_version <= TXGBE_FW_N_TXEQ) {
- value = (0x1804 & ~0x3F3F) | (24 << 8) | 4;
+ value = (0x1804 & ~0x3F3F) | (40 << 8);
wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
- value = (0x50 & ~0x7F) | 16 | (1 << 6);
+ value = (0x50 & ~0x7F) | (1 << 6);
wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
}
out:
--
2.27.0
* [PATCH 3/8] net/txgbe: fix packet type to parse from offload flags
[not found] <20230118060039.3074016-1-jiawenwu@trustnetic.com>
2023-01-18 6:00 ` [PATCH 1/8] net/txgbe: fix Rx buffer size in configure register Jiawen Wu
2023-01-18 6:00 ` [PATCH 2/8] net/txgbe: fix default signal quality value for KX/KX4 Jiawen Wu
@ 2023-01-18 6:00 ` Jiawen Wu
2023-01-27 15:36 ` Ferruh Yigit
2023-01-18 6:00 ` [PATCH 4/8] net/ngbe: " Jiawen Wu
` (3 subsequent siblings)
6 siblings, 1 reply; 27+ messages in thread
From: Jiawen Wu @ 2023-01-18 6:00 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu, stable
In some external applications, developers may fill in a wrong
packet_type in rte_mbuf for transmission. This results in a Tx ring hang
when Tx checksum offload is on. So change the driver to parse the packet
type from ol_flags instead.
Fixes: ca46fcd753b1 ("net/txgbe: support Tx with hardware offload")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/txgbe/txgbe_rxtx.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index ae70ca3beb..e91aaf60ef 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -516,20 +516,21 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
return cmdtype;
}
-static inline uint8_t
-tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
+static inline uint32_t
+tx_desc_ol_flags_to_ptype(uint64_t oflags)
{
+ uint32_t ptype;
bool tun;
- if (ptype)
- return txgbe_encode_ptype(ptype);
-
/* Only support flags in TXGBE_TX_OFFLOAD_MASK */
tun = !!(oflags & RTE_MBUF_F_TX_TUNNEL_MASK);
/* L2 level */
ptype = RTE_PTYPE_L2_ETHER;
if (oflags & RTE_MBUF_F_TX_VLAN)
+ ptype |= (tun ? RTE_PTYPE_INNER_L2_ETHER_VLAN : RTE_PTYPE_L2_ETHER_VLAN);
+
+ if (oflags & RTE_MBUF_F_TX_QINQ) //tun + qinq is not supported
ptype |= RTE_PTYPE_L2_ETHER_VLAN;
/* L3 level */
@@ -587,6 +588,14 @@ tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
break;
}
+ return ptype;
+}
+
+static inline uint8_t
+tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
+{
+ ptype = tx_desc_ol_flags_to_ptype(oflags);
+
return txgbe_encode_ptype(ptype);
}
--
2.27.0
* [PATCH 4/8] net/ngbe: fix packet type to parse from offload flags
[not found] <20230118060039.3074016-1-jiawenwu@trustnetic.com>
` (2 preceding siblings ...)
2023-01-18 6:00 ` [PATCH 3/8] net/txgbe: fix packet type to parse from offload flags Jiawen Wu
@ 2023-01-18 6:00 ` Jiawen Wu
2023-01-27 15:37 ` Ferruh Yigit
2023-01-18 6:00 ` [PATCH 5/8] net/ngbe: add spinlock protection on YT PHY Jiawen Wu
` (2 subsequent siblings)
6 siblings, 1 reply; 27+ messages in thread
From: Jiawen Wu @ 2023-01-18 6:00 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu, stable
In some external applications, developers may fill in a wrong
packet_type in rte_mbuf for transmission. This results in a Tx ring hang
when Tx checksum offload is on. So change the driver to parse the packet
type from ol_flags instead. Also remove the redundant tunnel type
handling, since the NIC does not support it.
Fixes: 9f3206140274 ("net/ngbe: support TSO")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/ngbe_rxtx.c | 87 +++++++++---------------------------
1 file changed, 20 insertions(+), 67 deletions(-)
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 9fd24fa444..09312cf40d 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -24,15 +24,11 @@
/* Bit Mask to indicate what bits required for building Tx context */
static const u64 NGBE_TX_OFFLOAD_MASK = (RTE_MBUF_F_TX_IP_CKSUM |
- RTE_MBUF_F_TX_OUTER_IPV6 |
- RTE_MBUF_F_TX_OUTER_IPV4 |
RTE_MBUF_F_TX_IPV6 |
RTE_MBUF_F_TX_IPV4 |
RTE_MBUF_F_TX_VLAN |
RTE_MBUF_F_TX_L4_MASK |
RTE_MBUF_F_TX_TCP_SEG |
- RTE_MBUF_F_TX_TUNNEL_MASK |
- RTE_MBUF_F_TX_OUTER_IP_CKSUM |
NGBE_TX_IEEE1588_TMST);
#define NGBE_TX_OFFLOAD_NOTSUP_MASK \
@@ -333,34 +329,15 @@ ngbe_set_xmit_ctx(struct ngbe_tx_queue *txq,
}
vlan_macip_lens = NGBE_TXD_IPLEN(tx_offload.l3_len >> 1);
-
- if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
- tx_offload_mask.outer_tun_len |= ~0;
- tx_offload_mask.outer_l2_len |= ~0;
- tx_offload_mask.outer_l3_len |= ~0;
- tx_offload_mask.l2_len |= ~0;
- tunnel_seed = NGBE_TXD_ETUNLEN(tx_offload.outer_tun_len >> 1);
- tunnel_seed |= NGBE_TXD_EIPLEN(tx_offload.outer_l3_len >> 2);
-
- switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
- case RTE_MBUF_F_TX_TUNNEL_IPIP:
- /* for non UDP / GRE tunneling, set to 0b */
- break;
- default:
- PMD_TX_LOG(ERR, "Tunnel type not supported");
- return;
- }
- vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.outer_l2_len);
- } else {
- tunnel_seed = 0;
- vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.l2_len);
- }
+ vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.l2_len);
if (ol_flags & RTE_MBUF_F_TX_VLAN) {
tx_offload_mask.vlan_tci |= ~0;
vlan_macip_lens |= NGBE_TXD_VLAN(tx_offload.vlan_tci);
}
+ tunnel_seed = 0;
+
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
tx_offload_mask.data[0] & tx_offload.data[0];
@@ -449,16 +426,10 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
return cmdtype;
}
-static inline uint8_t
-tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
+static inline uint32_t
+tx_desc_ol_flags_to_ptype(uint64_t oflags)
{
- bool tun;
-
- if (ptype)
- return ngbe_encode_ptype(ptype);
-
- /* Only support flags in NGBE_TX_OFFLOAD_MASK */
- tun = !!(oflags & RTE_MBUF_F_TX_TUNNEL_MASK);
+ uint32_t ptype;
/* L2 level */
ptype = RTE_PTYPE_L2_ETHER;
@@ -466,41 +437,34 @@ tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
ptype |= RTE_PTYPE_L2_ETHER_VLAN;
/* L3 level */
- if (oflags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM))
- ptype |= RTE_PTYPE_L3_IPV4;
- else if (oflags & (RTE_MBUF_F_TX_OUTER_IPV6))
- ptype |= RTE_PTYPE_L3_IPV6;
-
if (oflags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM))
- ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_L3_IPV4);
+ ptype |= RTE_PTYPE_L3_IPV4;
else if (oflags & (RTE_MBUF_F_TX_IPV6))
- ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV6 : RTE_PTYPE_L3_IPV6);
+ ptype |= RTE_PTYPE_L3_IPV6;
/* L4 level */
switch (oflags & (RTE_MBUF_F_TX_L4_MASK)) {
case RTE_MBUF_F_TX_TCP_CKSUM:
- ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
+ ptype |= RTE_PTYPE_L4_TCP;
break;
case RTE_MBUF_F_TX_UDP_CKSUM:
- ptype |= (tun ? RTE_PTYPE_INNER_L4_UDP : RTE_PTYPE_L4_UDP);
+ ptype |= RTE_PTYPE_L4_UDP;
break;
case RTE_MBUF_F_TX_SCTP_CKSUM:
- ptype |= (tun ? RTE_PTYPE_INNER_L4_SCTP : RTE_PTYPE_L4_SCTP);
+ ptype |= RTE_PTYPE_L4_SCTP;
break;
}
if (oflags & RTE_MBUF_F_TX_TCP_SEG)
- ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
-
- /* Tunnel */
- switch (oflags & RTE_MBUF_F_TX_TUNNEL_MASK) {
- case RTE_MBUF_F_TX_TUNNEL_IPIP:
- case RTE_MBUF_F_TX_TUNNEL_IP:
- ptype |= RTE_PTYPE_L2_ETHER |
- RTE_PTYPE_L3_IPV4 |
- RTE_PTYPE_TUNNEL_IP;
- break;
- }
+ ptype |= RTE_PTYPE_L4_TCP;
+
+ return ptype;
+}
+
+static inline uint8_t
+tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
+{
+ ptype = tx_desc_ol_flags_to_ptype(oflags);
return ngbe_encode_ptype(ptype);
}
@@ -629,9 +593,6 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.l4_len = tx_pkt->l4_len;
tx_offload.vlan_tci = tx_pkt->vlan_tci;
tx_offload.tso_segsz = tx_pkt->tso_segsz;
- tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
- tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
- tx_offload.outer_tun_len = 0;
/* If new context need be built or reuse the exist ctx*/
ctx = what_ctx_update(txq, tx_ol_req, tx_offload);
@@ -752,10 +713,6 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
*/
pkt_len -= (tx_offload.l2_len +
tx_offload.l3_len + tx_offload.l4_len);
- pkt_len -=
- (tx_pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
- ? tx_offload.outer_l2_len +
- tx_offload.outer_l3_len : 0;
}
/*
@@ -1939,12 +1896,8 @@ ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
- RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
RTE_ETH_TX_OFFLOAD_TCP_TSO |
RTE_ETH_TX_OFFLOAD_UDP_TSO |
- RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (hw->is_pf)
--
2.27.0
* [PATCH 5/8] net/ngbe: add spinlock protection on YT PHY
[not found] <20230118060039.3074016-1-jiawenwu@trustnetic.com>
` (3 preceding siblings ...)
2023-01-18 6:00 ` [PATCH 4/8] net/ngbe: " Jiawen Wu
@ 2023-01-18 6:00 ` Jiawen Wu
[not found] ` <20230202092132.3271910-1-jiawenwu@trustnetic.com>
2023-02-15 2:00 ` [PATCH v3] net/txgbe: fix Rx buffer size in configure register Jiawen Wu
6 siblings, 0 replies; 27+ messages in thread
From: Jiawen Wu @ 2023-01-18 6:00 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu, stable
For the yt8521s/yt8531s PHY, if other registers are accessed between
reads/writes of the ext field registers, the value of the ext field
registers will get corrupted for unknown reasons. So protect all
accesses to the ext field registers with a spinlock.
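The protection pattern, sketched from the diff below (hw->phy_lock is
the spinlock this patch adds to struct ngbe_hw):
	rte_spinlock_lock(&hw->phy_lock);
	/* open the ext ("sds") register window and access it without any
	 * other PHY register access interleaving */
	ngbe_read_phy_reg_ext_yt(hw, YT_CHIP, 0, &value);
	ngbe_write_phy_reg_ext_yt(hw, YT_CHIP, 0, value);
	rte_spinlock_unlock(&hw->phy_lock);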
Fixes: 44e97550ca68 ("net/ngbe: identify and reset PHY")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/base/ngbe_phy_yt.c | 36 +++++++++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_type.h | 1 +
2 files changed, 37 insertions(+)
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.c b/drivers/net/ngbe/base/ngbe_phy_yt.c
index c88946f7c3..726d6c8ef5 100644
--- a/drivers/net/ngbe/base/ngbe_phy_yt.c
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.c
@@ -100,11 +100,15 @@ s32 ngbe_write_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
s32 ngbe_init_phy_yt(struct ngbe_hw *hw)
{
+ rte_spinlock_init(&hw->phy_lock);
+
+ rte_spinlock_lock(&hw->phy_lock);
/* close sds area register */
ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, 0, 0);
/* enable interrupts */
ngbe_write_phy_reg_mdi(hw, YT_INTR, 0,
YT_INTR_ENA_MASK | YT_SDS_INTR_ENA_MASK);
+ rte_spinlock_unlock(&hw->phy_lock);
hw->phy.set_phy_power(hw, false);
@@ -123,7 +127,9 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
hw->phy.autoneg_advertised = 0;
/* check chip_mode first */
+ rte_spinlock_lock(&hw->phy_lock);
ngbe_read_phy_reg_ext_yt(hw, YT_CHIP, 0, &value);
+ rte_spinlock_unlock(&hw->phy_lock);
if ((value & YT_CHIP_MODE_MASK) == YT_CHIP_MODE_SEL(0)) {
/* UTP to rgmii */
if (!hw->mac.autoneg) {
@@ -146,11 +152,14 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
}
/* duplex full */
value |= YT_BCR_DUPLEX | YT_BCR_RESET;
+ rte_spinlock_lock(&hw->phy_lock);
ngbe_write_phy_reg_mdi(hw, YT_BCR, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
goto skip_an;
}
+ rte_spinlock_lock(&hw->phy_lock);
/*disable 100/10base-T Self-negotiation ability*/
ngbe_read_phy_reg_mdi(hw, YT_ANA, 0, &value);
value &= ~(YT_ANA_100BASET_FULL | YT_ANA_100BASET_HALF |
@@ -189,6 +198,7 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
ngbe_read_phy_reg_mdi(hw, YT_BCR, 0, &value);
value |= YT_BCR_RESET | YT_BCR_ANE | YT_BCR_RESTART_AN;
ngbe_write_phy_reg_mdi(hw, YT_BCR, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
skip_an:
hw->phy.set_phy_power(hw, true);
} else if ((value & YT_CHIP_MODE_MASK) == YT_CHIP_MODE_SEL(1)) {
@@ -199,6 +209,7 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
value = YT_RGMII_CONF1_RXDELAY |
YT_RGMII_CONF1_TXDELAY_FE |
YT_RGMII_CONF1_TXDELAY;
+ rte_spinlock_lock(&hw->phy_lock);
ngbe_write_phy_reg_ext_yt(hw, YT_RGMII_CONF1, 0, value);
value = YT_CHIP_MODE_SEL(1) |
YT_CHIP_SW_LDO_EN |
@@ -225,17 +236,21 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
value = YT_BCR_RESET | YT_BCR_DUPLEX |
YT_BCR_SPEED_SELECT1;
hw->phy.write_reg(hw, YT_BCR, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
hw->phy.set_phy_power(hw, true);
} else if ((value & YT_CHIP_MODE_MASK) == YT_CHIP_MODE_SEL(2)) {
hw->phy.set_phy_power(hw, true);
+ rte_spinlock_lock(&hw->phy_lock);
hw->phy.read_reg(hw, YT_SPST, 0, &value);
+ rte_spinlock_unlock(&hw->phy_lock);
if (value & YT_SPST_LINK) {
/* fiber up */
hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_1GB_FULL;
} else {
/* utp up */
+ rte_spinlock_lock(&hw->phy_lock);
/*disable 100/10base-T Self-negotiation ability*/
ngbe_read_phy_reg_mdi(hw, YT_ANA, 0, &value);
value &= ~(YT_ANA_100BASET_FULL | YT_ANA_100BASET_HALF |
@@ -279,10 +294,12 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
ngbe_read_phy_reg_mdi(hw, YT_BCR, 0, &value);
value |= YT_BCR_RESET;
ngbe_write_phy_reg_mdi(hw, YT_BCR, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
}
} else if ((value & YT_CHIP_MODE_MASK) == YT_CHIP_MODE_SEL(4)) {
hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_1GB_FULL;
+ rte_spinlock_lock(&hw->phy_lock);
ngbe_read_phy_reg_ext_yt(hw, YT_RGMII_CONF1, 0, &value);
value |= YT_RGMII_CONF1_MODE;
ngbe_write_phy_reg_ext_yt(hw, YT_RGMII_CONF1, 0, value);
@@ -297,6 +314,7 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
ngbe_read_phy_reg_ext_yt(hw, YT_CHIP, 0, &value);
value &= ~YT_SMI_PHY_SW_RST;
ngbe_write_phy_reg_ext_yt(hw, YT_CHIP, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
hw->phy.set_phy_power(hw, true);
} else if ((value & YT_CHIP_MODE_MASK) == YT_CHIP_MODE_SEL(5)) {
@@ -320,7 +338,9 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
}
/* duplex full */
value |= YT_BCR_DUPLEX | YT_BCR_RESET;
+ rte_spinlock_lock(&hw->phy_lock);
hw->phy.write_reg(hw, YT_BCR, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
goto skip_an_sr;
}
@@ -339,19 +359,23 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
/* duplex full */
value |= YT_BCR_DUPLEX | YT_BCR_RESET;
+ rte_spinlock_lock(&hw->phy_lock);
hw->phy.write_reg(hw, YT_BCR, 0, value);
/* software reset to make the above configuration take effect */
hw->phy.read_reg(hw, YT_BCR, 0, &value);
value |= YT_BCR_RESET | YT_BCR_ANE | YT_BCR_RESTART_AN;
hw->phy.write_reg(hw, 0x0, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
skip_an_sr:
hw->phy.set_phy_power(hw, true);
}
+ rte_spinlock_lock(&hw->phy_lock);
ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, 0, 0);
ngbe_read_phy_reg_mdi(hw, YT_INTR_STATUS, 0, &value);
+ rte_spinlock_unlock(&hw->phy_lock);
return 0;
}
@@ -366,6 +390,7 @@ s32 ngbe_reset_phy_yt(struct ngbe_hw *hw)
hw->phy.type != ngbe_phy_yt8521s_sfi)
return NGBE_ERR_PHY_TYPE;
+ rte_spinlock_lock(&hw->phy_lock);
/* check chip_mode first */
ngbe_read_phy_reg_ext_yt(hw, YT_CHIP, 0, &ctrl);
if (ctrl & YT_CHIP_MODE_MASK) {
@@ -395,6 +420,7 @@ s32 ngbe_reset_phy_yt(struct ngbe_hw *hw)
msleep(1);
}
}
+ rte_spinlock_unlock(&hw->phy_lock);
if (i == YT_PHY_RST_WAIT_PERIOD) {
DEBUGOUT("PHY reset polling failed to complete.");
@@ -409,7 +435,9 @@ s32 ngbe_get_phy_advertised_pause_yt(struct ngbe_hw *hw, u8 *pause_bit)
u16 value;
s32 status = 0;
+ rte_spinlock_lock(&hw->phy_lock);
status = hw->phy.read_reg(hw, YT_ANA, 0, &value);
+ rte_spinlock_unlock(&hw->phy_lock);
value &= YT_FANA_PAUSE_MASK;
*pause_bit = (u8)(value >> 7);
@@ -421,7 +449,9 @@ s32 ngbe_get_phy_lp_advertised_pause_yt(struct ngbe_hw *hw, u8 *pause_bit)
u16 value;
s32 status = 0;
+ rte_spinlock_lock(&hw->phy_lock);
status = hw->phy.read_reg(hw, YT_LPAR, 0, &value);
+ rte_spinlock_unlock(&hw->phy_lock);
value &= YT_FLPAR_PAUSE_MASK;
*pause_bit = (u8)(value >> 7);
@@ -433,10 +463,12 @@ s32 ngbe_set_phy_pause_adv_yt(struct ngbe_hw *hw, u16 pause_bit)
u16 value;
s32 status = 0;
+ rte_spinlock_lock(&hw->phy_lock);
status = hw->phy.read_reg(hw, YT_ANA, 0, &value);
value &= ~YT_FANA_PAUSE_MASK;
value |= pause_bit;
status = hw->phy.write_reg(hw, YT_ANA, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
return status;
}
@@ -453,6 +485,7 @@ s32 ngbe_check_phy_link_yt(struct ngbe_hw *hw,
/* Initialize speed and link to default case */
*link_up = false;
*speed = NGBE_LINK_SPEED_UNKNOWN;
+ rte_spinlock_lock(&hw->phy_lock);
ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, 0, 0);
ngbe_read_phy_reg_mdi(hw, YT_INTR_STATUS, 0, &insr);
@@ -472,6 +505,7 @@ s32 ngbe_check_phy_link_yt(struct ngbe_hw *hw,
*link_up = true;
}
+ rte_spinlock_unlock(&hw->phy_lock);
if (*link_up) {
if (phy_speed == YT_SPST_SPEED_1000M)
*speed = NGBE_LINK_SPEED_1GB_FULL;
@@ -488,6 +522,7 @@ s32 ngbe_set_phy_power_yt(struct ngbe_hw *hw, bool on)
{
u16 value = 0;
+ rte_spinlock_lock(&hw->phy_lock);
/* power down/up in fiber mode */
hw->phy.read_reg(hw, YT_BCR, 0, &value);
if (on)
@@ -504,6 +539,7 @@ s32 ngbe_set_phy_power_yt(struct ngbe_hw *hw, bool on)
else
value |= YT_BCR_PWDN;
ngbe_write_phy_reg_mdi(hw, YT_BCR, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
return 0;
}
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index aa5c41146c..05804eeab7 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -433,6 +433,7 @@ struct ngbe_hw {
bool gpio_ctl;
u32 led_conf;
bool init_phy;
+ rte_spinlock_t phy_lock;
struct {
u64 rx_qp_packets;
u64 tx_qp_packets;
--
2.27.0
* Re: [PATCH 1/8] net/txgbe: fix Rx buffer size in configure register
2023-01-18 6:00 ` [PATCH 1/8] net/txgbe: fix Rx buffer size in configure register Jiawen Wu
@ 2023-01-27 15:36 ` Ferruh Yigit
2023-02-01 2:34 ` Jiawen Wu
0 siblings, 1 reply; 27+ messages in thread
From: Ferruh Yigit @ 2023-01-27 15:36 UTC (permalink / raw)
To: Jiawen Wu, dev; +Cc: stable
On 1/18/2023 6:00 AM, Jiawen Wu wrote:
> When rounding up the buffer size to 1K to configure the register, the
> hardware will receive packets exceeding the actual buffer size in LRO
> mode. This causes a segmentation fault in the receive function.
>
> Fixes: be797cbf4582 ("net/txgbe: add Rx and Tx init")
> Cc: stable@dpdk.org
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> drivers/net/txgbe/txgbe_rxtx.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
> index ac1bba08a3..ae70ca3beb 100644
> --- a/drivers/net/txgbe/txgbe_rxtx.c
> +++ b/drivers/net/txgbe/txgbe_rxtx.c
> @@ -4382,7 +4382,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
> */
> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
> RTE_PKTMBUF_HEADROOM);
> - buf_size = ROUND_UP(buf_size, 0x1 << 10);
> + buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
> srrctl |= TXGBE_RXCFG_PKTLEN(buf_size);
>
What if 'buf_size' is less than 1K, won't 'ROUND_DOWN' make it 0? I
assume setting 0 in the register may cause a problem.
And according to the code comment for 'buf_size' [1], the buffer size
can't be more than 16K, but technically 'buf_size' can be more than 16K.
Does the HW constrain values larger than 16K? If not, the 'buf_size'
value needs to be checked against the 16K limit.
[1]
/*
* Configure the RX buffer size in the PKTLEN field of
* the RXCFG register of the queue.
* The value is in 1 KB resolution. Valid values can be from
* 1 KB to 16 KB.
*/
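One possible guard for the sub-1K case (the v2 patches later in this
thread adopt exactly this):
	if (buf_size < 1024)
		buf_size = ROUND_UP(buf_size, 0x1 << 10);
	else
		buf_size = ROUND_DOWN(buf_size, 0x1 << 10);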
* Re: [PATCH 3/8] net/txgbe: fix packet type to parse from offload flags
2023-01-18 6:00 ` [PATCH 3/8] net/txgbe: fix packet type to parse from offload flags Jiawen Wu
@ 2023-01-27 15:36 ` Ferruh Yigit
2023-02-01 3:14 ` Jiawen Wu
0 siblings, 1 reply; 27+ messages in thread
From: Ferruh Yigit @ 2023-01-27 15:36 UTC (permalink / raw)
To: Jiawen Wu, dev; +Cc: stable
On 1/18/2023 6:00 AM, Jiawen Wu wrote:
> In some external applications, developers may fill in a wrong
> packet_type in rte_mbuf for transmission. This results in a Tx ring hang
> when Tx checksum offload is on. So change the driver to parse the packet
> type from ol_flags instead.
>
Can you please give more information on what packet_type value is
causing the problem in the Tx path?
> Fixes: ca46fcd753b1 ("net/txgbe: support Tx with hardware offload")
> Cc: stable@dpdk.org
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> drivers/net/txgbe/txgbe_rxtx.c | 19 ++++++++++++++-----
> 1 file changed, 14 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
> index ae70ca3beb..e91aaf60ef 100644
> --- a/drivers/net/txgbe/txgbe_rxtx.c
> +++ b/drivers/net/txgbe/txgbe_rxtx.c
> @@ -516,20 +516,21 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
> return cmdtype;
> }
>
> -static inline uint8_t
> -tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
> +static inline uint32_t
> +tx_desc_ol_flags_to_ptype(uint64_t oflags)
> {
> + uint32_t ptype;
> bool tun;
>
> - if (ptype)
> - return txgbe_encode_ptype(ptype);
> -
> /* Only support flags in TXGBE_TX_OFFLOAD_MASK */
> tun = !!(oflags & RTE_MBUF_F_TX_TUNNEL_MASK);
>
> /* L2 level */
> ptype = RTE_PTYPE_L2_ETHER;
> if (oflags & RTE_MBUF_F_TX_VLAN)
> + ptype |= (tun ? RTE_PTYPE_INNER_L2_ETHER_VLAN : RTE_PTYPE_L2_ETHER_VLAN);
> +
> + if (oflags & RTE_MBUF_F_TX_QINQ) //tun + qinq is not supported
checkpatch is complaining about c99 comment syntax ('//').
> ptype |= RTE_PTYPE_L2_ETHER_VLAN;
>
> /* L3 level */
> @@ -587,6 +588,14 @@ tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
> break;
> }
>
> + return ptype;
> +}
> +
> +static inline uint8_t
> +tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
> +{
> + ptype = tx_desc_ol_flags_to_ptype(oflags);
> +
This function gets 'ptype' as a parameter and immediately overwrites it
with the calculated value; what is the point of taking 'ptype' as an
argument?
> return txgbe_encode_ptype(ptype);
> }
>
Overall, why is 'ptype' calculated for the Tx path? I see this value is
used to check whether the packet is tunneled or not; is there any other
usage of ptype in this path? If not, why parse all packet types?
* Re: [PATCH 4/8] net/ngbe: fix packet type to parse from offload flags
2023-01-18 6:00 ` [PATCH 4/8] net/ngbe: " Jiawen Wu
@ 2023-01-27 15:37 ` Ferruh Yigit
0 siblings, 0 replies; 27+ messages in thread
From: Ferruh Yigit @ 2023-01-27 15:37 UTC (permalink / raw)
To: Jiawen Wu, dev; +Cc: stable
On 1/18/2023 6:00 AM, Jiawen Wu wrote:
> In some external applications, developers may fill in a wrong
> packet_type in rte_mbuf for transmission. This results in a Tx ring hang
> when Tx checksum offload is on. So change the driver to parse the packet
> type from ol_flags instead. Also remove the redundant tunnel type
> handling, since the NIC does not support it.
>
> Fixes: 9f3206140274 ("net/ngbe: support TSO")
> Cc: stable@dpdk.org
>
Similar comments apply as for the txgbe version of this patch.
Can you please explain more about what causes the problem when setting
mbuf packet_type for the Tx path?
And in 'tx_desc_ol_flags_to_ptid()', the 'ptype' parameter is overwritten
before it is used, so the parameter is useless.
* RE: [PATCH 1/8] net/txgbe: fix Rx buffer size in configure register
2023-01-27 15:36 ` Ferruh Yigit
@ 2023-02-01 2:34 ` Jiawen Wu
2023-02-01 10:40 ` Ferruh Yigit
0 siblings, 1 reply; 27+ messages in thread
From: Jiawen Wu @ 2023-02-01 2:34 UTC (permalink / raw)
To: 'Ferruh Yigit', dev; +Cc: stable
On Friday, January 27, 2023 11:36 PM, Ferruh Yigit wrote:
> On 1/18/2023 6:00 AM, Jiawen Wu wrote:
> > When rounding up the buffer size to 1K to configure the register,
> > the hardware will receive packets exceeding the actual buffer size
> > in LRO mode. This causes a segmentation fault in the receive
> > function.
> >
> > Fixes: be797cbf4582 ("net/txgbe: add Rx and Tx init")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> > ---
> > drivers/net/txgbe/txgbe_rxtx.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/txgbe/txgbe_rxtx.c
> > b/drivers/net/txgbe/txgbe_rxtx.c index ac1bba08a3..ae70ca3beb 100644
> > --- a/drivers/net/txgbe/txgbe_rxtx.c
> > +++ b/drivers/net/txgbe/txgbe_rxtx.c
> > @@ -4382,7 +4382,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
> > */
> > buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
> > RTE_PKTMBUF_HEADROOM);
> > - buf_size = ROUND_UP(buf_size, 0x1 << 10);
> > + buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
> > srrctl |= TXGBE_RXCFG_PKTLEN(buf_size);
> >
>
> What if 'buf_size' is less than 1K, won't 'ROUND_DOWN' make it 0? I
> assume setting 0 in the register may cause a problem.
>
It is indeed a problem, I will fix it.
> And according to the code comment for 'buf_size' [1], the buffer size
> can't be more than 16K, but technically 'buf_size' can be more than 16K.
> Does the HW constrain values larger than 16K? If not, the 'buf_size'
> value needs to be checked against the 16K limit.
>
There is a macro that defines the conversion to enforce the 16K limit.
#define PKTLEN(v) ROUND_OVER(v, 14, 10)
#define ROUND_OVER(x, maxbits, unitbits) \
((x) >= 1 << (maxbits) ? 0 : (x) >> (unitbits))
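So with maxbits = 14 and unitbits = 10 the macro evaluates as follows
(illustrative values):
	PKTLEN(15 * 1024) = 15  /* 15 KB, within the valid 1 KB - 16 KB range */
	PKTLEN(16 * 1024) = 0   /* >= 16 KB clamps to 0 */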
>
>
>
> [1]
> /*
> * Configure the RX buffer size in the PKTLEN field of
> * the RXCFG register of the queue.
> * The value is in 1 KB resolution. Valid values can be from
> * 1 KB to 16 KB.
> */
>
* RE: [PATCH 3/8] net/txgbe: fix packet type to parse from offload flags
2023-01-27 15:36 ` Ferruh Yigit
@ 2023-02-01 3:14 ` Jiawen Wu
2023-02-01 10:41 ` Ferruh Yigit
0 siblings, 1 reply; 27+ messages in thread
From: Jiawen Wu @ 2023-02-01 3:14 UTC (permalink / raw)
To: 'Ferruh Yigit', dev; +Cc: stable
On Friday, January 27, 2023 11:37 PM, Ferruh Yigit wrote:
> On 1/18/2023 6:00 AM, Jiawen Wu wrote:
> > In some external applications, developers may fill in a wrong
> > packet_type in rte_mbuf for transmission. This results in a Tx ring
> > hang when Tx checksum offload is on. So change the driver to parse
> > the packet type from ol_flags instead.
> >
>
> Can you please give more information on what packet_type value is
> causing the problem in the Tx path?
>
> > Fixes: ca46fcd753b1 ("net/txgbe: support Tx with hardware offload")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> > ---
> > drivers/net/txgbe/txgbe_rxtx.c | 19 ++++++++++++++-----
> > 1 file changed, 14 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/net/txgbe/txgbe_rxtx.c
> > b/drivers/net/txgbe/txgbe_rxtx.c index ae70ca3beb..e91aaf60ef 100644
> > --- a/drivers/net/txgbe/txgbe_rxtx.c
> > +++ b/drivers/net/txgbe/txgbe_rxtx.c
> > @@ -516,20 +516,21 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
> > return cmdtype;
> > }
> >
> > -static inline uint8_t
> > -tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
> > +static inline uint32_t
> > +tx_desc_ol_flags_to_ptype(uint64_t oflags)
> > {
> > + uint32_t ptype;
> > bool tun;
> >
> > - if (ptype)
> > - return txgbe_encode_ptype(ptype);
> > -
> > /* Only support flags in TXGBE_TX_OFFLOAD_MASK */
> > tun = !!(oflags & RTE_MBUF_F_TX_TUNNEL_MASK);
> >
> > /* L2 level */
> > ptype = RTE_PTYPE_L2_ETHER;
> > if (oflags & RTE_MBUF_F_TX_VLAN)
> > + ptype |= (tun ? RTE_PTYPE_INNER_L2_ETHER_VLAN :
> > +RTE_PTYPE_L2_ETHER_VLAN);
> > +
> > + if (oflags & RTE_MBUF_F_TX_QINQ) //tun + qinq is not supported
>
> checkpatch is complaining about c99 comment syntax ('//').
>
> > ptype |= RTE_PTYPE_L2_ETHER_VLAN;
> >
> > /* L3 level */
> > @@ -587,6 +588,14 @@ tx_desc_ol_flags_to_ptid(uint64_t oflags,
> > uint32_t ptype)
> > break;
> > }
> >
> > + return ptype;
> > +}
> > +
> > +static inline uint8_t
> > +tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype) {
> > + ptype = tx_desc_ol_flags_to_ptype(oflags);
> > +
>
> This function gets 'ptype' as a parameter and immediately overwrites it
> with the calculated value; what is the point of taking 'ptype' as an
> argument?
>
> > return txgbe_encode_ptype(ptype);
> > }
> >
>
> Overall, why is 'ptype' calculated for the Tx path? I see this value is
> used to check whether the packet is tunneled or not; is there any other
> usage of ptype in this path? If not, why parse all packet types?
>
If Tx checksum offload or TSO is on, our hardware needs a context descriptor, which contains the length of each packet layer and the packet type.
If the packet type and lengths do not strictly match, it will cause a Tx ring hang. It's not just for tunnel packets, but most cases are caused by developers doing encap/decap at the application layer.
However, we can set the packet type flexibly. For example, if there is no inner checksum for a VXLAN packet, we can simply regard it as a UDP packet.
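For illustration, a hypothetical application-side setup where this
matters (the field values are examples, not taken from the patch):
	/* After encap/decap, an application may forget to refresh
	 * m->packet_type, but the ol_flags and l2/l3/l4 lengths it sets
	 * must be correct for the offload request anyway: */
	m->ol_flags = RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
		      RTE_MBUF_F_TX_TCP_CKSUM;
	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->l4_len = sizeof(struct rte_tcp_hdr);
	/* Deriving the ptype from ol_flags keeps the context descriptor
	 * consistent with these lengths even if m->packet_type is stale. */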
* Re: [PATCH 1/8] net/txgbe: fix Rx buffer size in configure register
2023-02-01 2:34 ` Jiawen Wu
@ 2023-02-01 10:40 ` Ferruh Yigit
0 siblings, 0 replies; 27+ messages in thread
From: Ferruh Yigit @ 2023-02-01 10:40 UTC (permalink / raw)
To: Jiawen Wu, dev; +Cc: stable
On 2/1/2023 2:34 AM, Jiawen Wu wrote:
> On Friday, January 27, 2023 11:36 PM, Ferruh Yigit wrote:
>> On 1/18/2023 6:00 AM, Jiawen Wu wrote:
>>> When rounding up the buffer size to 1K to configure the register,
>>> the hardware will receive packets exceeding the actual buffer size
>>> in LRO mode. This causes a segmentation fault in the receive
>>> function.
>>>
>>> Fixes: be797cbf4582 ("net/txgbe: add Rx and Tx init")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
>>> ---
>>> drivers/net/txgbe/txgbe_rxtx.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/net/txgbe/txgbe_rxtx.c
>>> b/drivers/net/txgbe/txgbe_rxtx.c index ac1bba08a3..ae70ca3beb 100644
>>> --- a/drivers/net/txgbe/txgbe_rxtx.c
>>> +++ b/drivers/net/txgbe/txgbe_rxtx.c
>>> @@ -4382,7 +4382,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
>>> */
>>> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
>>> RTE_PKTMBUF_HEADROOM);
>>> - buf_size = ROUND_UP(buf_size, 0x1 << 10);
>>> + buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
>>> srrctl |= TXGBE_RXCFG_PKTLEN(buf_size);
>>>
>>
>> What if 'buf_size' is less than 1K, won't 'ROUND_DOWN' make it 0? I
>> assume setting 0 in the register may cause a problem.
>>
>
> It is indeed a problem, I will fix it.
>
>> And according to the code comment for 'buf_size' [1], the buffer size
>> can't be more than 16K, but technically 'buf_size' can be more than 16K.
>> Does the HW constrain values larger than 16K? If not, the 'buf_size'
>> value needs to be checked against the 16K limit.
>>
>
> There is a macro that defines the conversion to enforce the 16K limit.
>
> #define PKTLEN(v) ROUND_OVER(v, 14, 10)
> #define ROUND_OVER(x, maxbits, unitbits) \
> ((x) >= 1 << (maxbits) ? 0 : (x) >> (unitbits))
>
ack
* Re: [PATCH 3/8] net/txgbe: fix packet type to parse from offload flags
2023-02-01 3:14 ` Jiawen Wu
@ 2023-02-01 10:41 ` Ferruh Yigit
0 siblings, 0 replies; 27+ messages in thread
From: Ferruh Yigit @ 2023-02-01 10:41 UTC (permalink / raw)
To: Jiawen Wu, dev; +Cc: stable
On 2/1/2023 3:14 AM, Jiawen Wu wrote:
> On Friday, January 27, 2023 11:37 PM, Ferruh Yigit wrote:
>> On 1/18/2023 6:00 AM, Jiawen Wu wrote:
>>> In some external applications, developers may fill in a wrong
>>> packet_type in rte_mbuf for transmission. This results in a Tx ring
>>> hang when Tx checksum offload is on. So change the driver to parse
>>> the packet type from ol_flags instead.
>>>
>>
>> Can you please give more information on what packet_type value is
>> causing the problem in the Tx path?
>>
>>> Fixes: ca46fcd753b1 ("net/txgbe: support Tx with hardware offload")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
>>> ---
>>> drivers/net/txgbe/txgbe_rxtx.c | 19 ++++++++++++++-----
>>> 1 file changed, 14 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/drivers/net/txgbe/txgbe_rxtx.c
>>> b/drivers/net/txgbe/txgbe_rxtx.c index ae70ca3beb..e91aaf60ef 100644
>>> --- a/drivers/net/txgbe/txgbe_rxtx.c
>>> +++ b/drivers/net/txgbe/txgbe_rxtx.c
>>> @@ -516,20 +516,21 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
>>> return cmdtype;
>>> }
>>>
>>> -static inline uint8_t
>>> -tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
>>> +static inline uint32_t
>>> +tx_desc_ol_flags_to_ptype(uint64_t oflags)
>>> {
>>> + uint32_t ptype;
>>> bool tun;
>>>
>>> - if (ptype)
>>> - return txgbe_encode_ptype(ptype);
>>> -
>>> /* Only support flags in TXGBE_TX_OFFLOAD_MASK */
>>> tun = !!(oflags & RTE_MBUF_F_TX_TUNNEL_MASK);
>>>
>>> /* L2 level */
>>> ptype = RTE_PTYPE_L2_ETHER;
>>> if (oflags & RTE_MBUF_F_TX_VLAN)
>>> + ptype |= (tun ? RTE_PTYPE_INNER_L2_ETHER_VLAN :
>>> +RTE_PTYPE_L2_ETHER_VLAN);
>>> +
>>> + if (oflags & RTE_MBUF_F_TX_QINQ) //tun + qinq is not supported
>>
>> checkpatch is complaining about c99 comment syntax ('//').
>>
>>> ptype |= RTE_PTYPE_L2_ETHER_VLAN;
>>>
>>> /* L3 level */
>>> @@ -587,6 +588,14 @@ tx_desc_ol_flags_to_ptid(uint64_t oflags,
>>> uint32_t ptype)
>>> break;
>>> }
>>>
>>> + return ptype;
>>> +}
>>> +
>>> +static inline uint8_t
>>> +tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype) {
>>> + ptype = tx_desc_ol_flags_to_ptype(oflags);
>>> +
>>
>> This function gets 'ptype' as a parameter and immediately overwrites it
>> with the calculated value; what is the point of taking 'ptype' as an
>> argument?
>>
>>> return txgbe_encode_ptype(ptype);
>>> }
>>>
>>
>> Overall, why is 'ptype' calculated for the Tx path? I see this value is
>> used to check whether the packet is tunneled or not; is there any other
>> usage of ptype in this path? If not, why parse all packet types?
>>
>
> If Tx checksum offload or TSO is on, our hardware needs a context descriptor, which contains the length of each packet layer and the packet type.
> If the packet type and lengths do not strictly match, it will cause a Tx ring hang. It's not just for tunnel packets, but most cases are caused by developers doing encap/decap at the application layer.
> However, we can set the packet type flexibly. For example, if there is no inner checksum for a VXLAN packet, we can simply regard it as a UDP packet.
>
>
Thanks for the clarification. Can you please expand the commit log with
the above information in the next version?
* [PATCH v2 01/10] net/ngbe: fix Rx buffer size in configure register
[not found] ` <20230202092132.3271910-1-jiawenwu@trustnetic.com>
@ 2023-02-02 9:21 ` Jiawen Wu
2023-02-08 10:28 ` Ferruh Yigit
2023-02-02 9:21 ` [PATCH v2 02/10] net/txgbe: " Jiawen Wu
` (5 subsequent siblings)
6 siblings, 1 reply; 27+ messages in thread
From: Jiawen Wu @ 2023-02-02 9:21 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu, stable
When the buffer size is less than 1K, rounding it down makes it 0, which
is an invalid value.
Fixes: 62fc35e63d0e ("net/ngbe: support Rx queue start/stop")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/ngbe_rxtx.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 9fd24fa444..9a646cb6a7 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -2944,7 +2944,10 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
*/
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
RTE_PKTMBUF_HEADROOM);
- buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
+ if (buf_size < 1024)
+ buf_size = ROUND_UP(buf_size, 0x1 << 10);
+ else
+ buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
srrctl |= NGBE_RXCFG_PKTLEN(buf_size);
wr32(hw, NGBE_RXCFG(rxq->reg_idx), srrctl);
--
2.27.0
* [PATCH v2 02/10] net/txgbe: fix Rx buffer size in configure register
[not found] ` <20230202092132.3271910-1-jiawenwu@trustnetic.com>
2023-02-02 9:21 ` [PATCH v2 01/10] net/ngbe: fix Rx buffer size in configure register Jiawen Wu
@ 2023-02-02 9:21 ` Jiawen Wu
2023-02-02 9:21 ` [PATCH v2 03/10] net/txgbe: fix default signal quality value for KX/KX4 Jiawen Wu
` (4 subsequent siblings)
6 siblings, 0 replies; 27+ messages in thread
From: Jiawen Wu @ 2023-02-02 9:21 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu, stable
When rounding up the buffer size to 1K to configure the register, the
hardware will receive packets exceeding the actual buffer size in LRO
mode. This causes a segmentation fault in the receive function.
Fixes: be797cbf4582 ("net/txgbe: add Rx and Tx init")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/txgbe/txgbe_rxtx.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index ac1bba08a3..328406908d 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -4382,7 +4382,10 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
*/
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
RTE_PKTMBUF_HEADROOM);
- buf_size = ROUND_UP(buf_size, 0x1 << 10);
+ if (buf_size < 1024)
+ buf_size = ROUND_UP(buf_size, 0x1 << 10);
+ else
+ buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
srrctl |= TXGBE_RXCFG_PKTLEN(buf_size);
wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
--
2.27.0
* [PATCH v2 03/10] net/txgbe: fix default signal quality value for KX/KX4
[not found] ` <20230202092132.3271910-1-jiawenwu@trustnetic.com>
2023-02-02 9:21 ` [PATCH v2 01/10] net/ngbe: fix Rx buffer size in configure register Jiawen Wu
2023-02-02 9:21 ` [PATCH v2 02/10] net/txgbe: " Jiawen Wu
@ 2023-02-02 9:21 ` Jiawen Wu
2023-02-02 9:21 ` [PATCH v2 04/10] net/txgbe: fix packet type to parse from offload flags Jiawen Wu
` (3 subsequent siblings)
6 siblings, 0 replies; 27+ messages in thread
From: Jiawen Wu @ 2023-02-02 9:21 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu, stable
On old firmware versions, the default value of the signal quality
(TX_EQ) is configured by the driver. Fix it for KX/KX4 mode.
Fixes: 01c3cf5c85a7 ("net/txgbe: add autoneg control read and write")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/txgbe/base/txgbe_phy.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/txgbe/base/txgbe_phy.c b/drivers/net/txgbe/base/txgbe_phy.c
index 9f46d5bdb0..87935abdaa 100644
--- a/drivers/net/txgbe/base/txgbe_phy.c
+++ b/drivers/net/txgbe/base/txgbe_phy.c
@@ -1693,9 +1693,10 @@ txgbe_set_link_to_kx4(struct txgbe_hw *hw, bool autoneg)
wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
} else if (hw->fw_version <= TXGBE_FW_N_TXEQ) {
value = (0x1804 & ~0x3F3F);
+ value |= 40 << 8;
wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
- value = (0x50 & ~0x7F) | 40 | (1 << 6);
+ value = (0x50 & ~0x7F) | (1 << 6);
wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
}
out:
@@ -1907,10 +1908,10 @@ txgbe_set_link_to_kx(struct txgbe_hw *hw,
value |= hw->phy.ffe_post | (1 << 6);
wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
} else if (hw->fw_version <= TXGBE_FW_N_TXEQ) {
- value = (0x1804 & ~0x3F3F) | (24 << 8) | 4;
+ value = (0x1804 & ~0x3F3F) | (40 << 8);
wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
- value = (0x50 & ~0x7F) | 16 | (1 << 6);
+ value = (0x50 & ~0x7F) | (1 << 6);
wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
}
out:
--
2.27.0
* [PATCH v2 04/10] net/txgbe: fix packet type to parse from offload flags
[not found] ` <20230202092132.3271910-1-jiawenwu@trustnetic.com>
` (2 preceding siblings ...)
2023-02-02 9:21 ` [PATCH v2 03/10] net/txgbe: fix default signal quality value for KX/KX4 Jiawen Wu
@ 2023-02-02 9:21 ` Jiawen Wu
2023-02-02 9:21 ` [PATCH v2 05/10] net/ngbe: " Jiawen Wu
` (2 subsequent siblings)
6 siblings, 0 replies; 27+ messages in thread
From: Jiawen Wu @ 2023-02-02 9:21 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu, stable
Context descriptors, which contain the length of each packet layer and
the packet type, are needed when Tx checksum offload or TSO is on. If
the packet type and lengths do not strictly match, it will cause a Tx
ring hang.
In some external applications, developers may fill in a wrong
packet_type in rte_mbuf for the Tx path. For example, they encap/decap
the packets but do not refill the packet_type. To prevent this, change
the driver to parse the packet type from ol_flags.
Fixes: ca46fcd753b1 ("net/txgbe: support Tx with hardware offload")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/txgbe/txgbe_rxtx.c | 24 +++++++++++++++++-------
1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 328406908d..f7e3159dc9 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -516,20 +516,21 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
return cmdtype;
}
-static inline uint8_t
-tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
+static inline uint32_t
+tx_desc_ol_flags_to_ptype(uint64_t oflags)
{
+ uint32_t ptype;
bool tun;
- if (ptype)
- return txgbe_encode_ptype(ptype);
-
/* Only support flags in TXGBE_TX_OFFLOAD_MASK */
tun = !!(oflags & RTE_MBUF_F_TX_TUNNEL_MASK);
/* L2 level */
ptype = RTE_PTYPE_L2_ETHER;
if (oflags & RTE_MBUF_F_TX_VLAN)
+ ptype |= (tun ? RTE_PTYPE_INNER_L2_ETHER_VLAN : RTE_PTYPE_L2_ETHER_VLAN);
+
+ if (oflags & RTE_MBUF_F_TX_QINQ) /* tunnel + QINQ is not supported */
ptype |= RTE_PTYPE_L2_ETHER_VLAN;
/* L3 level */
@@ -587,6 +588,16 @@ tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
break;
}
+ return ptype;
+}
+
+static inline uint8_t
+tx_desc_ol_flags_to_ptid(uint64_t oflags)
+{
+ uint32_t ptype;
+
+ ptype = tx_desc_ol_flags_to_ptype(oflags);
+
return txgbe_encode_ptype(ptype);
}
@@ -776,8 +787,7 @@ txgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/* If hardware offload required */
tx_ol_req = ol_flags & TXGBE_TX_OFFLOAD_MASK;
if (tx_ol_req) {
- tx_offload.ptid = tx_desc_ol_flags_to_ptid(tx_ol_req,
- tx_pkt->packet_type);
+ tx_offload.ptid = tx_desc_ol_flags_to_ptid(tx_ol_req);
if (tx_offload.ptid & TXGBE_PTID_PKT_TUN)
tx_offload.ptid |= txgbe_parse_tun_ptid(tx_pkt);
tx_offload.l2_len = tx_pkt->l2_len;
--
2.27.0
* [PATCH v2 05/10] net/ngbe: fix packet type to parse from offload flags
[not found] ` <20230202092132.3271910-1-jiawenwu@trustnetic.com>
` (3 preceding siblings ...)
2023-02-02 9:21 ` [PATCH v2 04/10] net/txgbe: fix packet type to parse from offload flags Jiawen Wu
@ 2023-02-02 9:21 ` Jiawen Wu
2023-02-02 9:21 ` [PATCH v2 06/10] net/ngbe: add spinlock protection on YT PHY Jiawen Wu
2023-02-02 9:21 ` [PATCH v2 09/10] net/txgbe: fix interrupt loss Jiawen Wu
6 siblings, 0 replies; 27+ messages in thread
From: Jiawen Wu @ 2023-02-02 9:21 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu, stable
Context descriptors, which contain the length of each packet layer and
the packet type, are needed when Tx checksum offload or TSO is on. If
the packet type and lengths do not strictly match, it will cause a Tx
ring hang.
In some external applications, developers may fill in a wrong
packet_type in rte_mbuf for the Tx path. For example, they encap/decap
the packets but do not refill the packet_type. To prevent this, change
the driver to parse the packet type from ol_flags.
Also remove the redundant tunnel type handling, since the NIC does not
support it.
Fixes: 9f3206140274 ("net/ngbe: support TSO")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/ngbe_rxtx.c | 92 +++++++++---------------------------
1 file changed, 23 insertions(+), 69 deletions(-)
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 9a646cb6a7..0dce4079b5 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -24,15 +24,11 @@
/* Bit Mask to indicate what bits required for building Tx context */
static const u64 NGBE_TX_OFFLOAD_MASK = (RTE_MBUF_F_TX_IP_CKSUM |
- RTE_MBUF_F_TX_OUTER_IPV6 |
- RTE_MBUF_F_TX_OUTER_IPV4 |
RTE_MBUF_F_TX_IPV6 |
RTE_MBUF_F_TX_IPV4 |
RTE_MBUF_F_TX_VLAN |
RTE_MBUF_F_TX_L4_MASK |
RTE_MBUF_F_TX_TCP_SEG |
- RTE_MBUF_F_TX_TUNNEL_MASK |
- RTE_MBUF_F_TX_OUTER_IP_CKSUM |
NGBE_TX_IEEE1588_TMST);
#define NGBE_TX_OFFLOAD_NOTSUP_MASK \
@@ -333,34 +329,15 @@ ngbe_set_xmit_ctx(struct ngbe_tx_queue *txq,
}
vlan_macip_lens = NGBE_TXD_IPLEN(tx_offload.l3_len >> 1);
-
- if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
- tx_offload_mask.outer_tun_len |= ~0;
- tx_offload_mask.outer_l2_len |= ~0;
- tx_offload_mask.outer_l3_len |= ~0;
- tx_offload_mask.l2_len |= ~0;
- tunnel_seed = NGBE_TXD_ETUNLEN(tx_offload.outer_tun_len >> 1);
- tunnel_seed |= NGBE_TXD_EIPLEN(tx_offload.outer_l3_len >> 2);
-
- switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
- case RTE_MBUF_F_TX_TUNNEL_IPIP:
- /* for non UDP / GRE tunneling, set to 0b */
- break;
- default:
- PMD_TX_LOG(ERR, "Tunnel type not supported");
- return;
- }
- vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.outer_l2_len);
- } else {
- tunnel_seed = 0;
- vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.l2_len);
- }
+ vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.l2_len);
if (ol_flags & RTE_MBUF_F_TX_VLAN) {
tx_offload_mask.vlan_tci |= ~0;
vlan_macip_lens |= NGBE_TXD_VLAN(tx_offload.vlan_tci);
}
+ tunnel_seed = 0;
+
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
tx_offload_mask.data[0] & tx_offload.data[0];
@@ -449,16 +426,10 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
return cmdtype;
}
-static inline uint8_t
-tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
+static inline uint32_t
+tx_desc_ol_flags_to_ptype(uint64_t oflags)
{
- bool tun;
-
- if (ptype)
- return ngbe_encode_ptype(ptype);
-
- /* Only support flags in NGBE_TX_OFFLOAD_MASK */
- tun = !!(oflags & RTE_MBUF_F_TX_TUNNEL_MASK);
+ uint32_t ptype;
/* L2 level */
ptype = RTE_PTYPE_L2_ETHER;
@@ -466,41 +437,36 @@ tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
ptype |= RTE_PTYPE_L2_ETHER_VLAN;
/* L3 level */
- if (oflags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM))
- ptype |= RTE_PTYPE_L3_IPV4;
- else if (oflags & (RTE_MBUF_F_TX_OUTER_IPV6))
- ptype |= RTE_PTYPE_L3_IPV6;
-
if (oflags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM))
- ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_L3_IPV4);
+ ptype |= RTE_PTYPE_L3_IPV4;
else if (oflags & (RTE_MBUF_F_TX_IPV6))
- ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV6 : RTE_PTYPE_L3_IPV6);
+ ptype |= RTE_PTYPE_L3_IPV6;
/* L4 level */
switch (oflags & (RTE_MBUF_F_TX_L4_MASK)) {
case RTE_MBUF_F_TX_TCP_CKSUM:
- ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
+ ptype |= RTE_PTYPE_L4_TCP;
break;
case RTE_MBUF_F_TX_UDP_CKSUM:
- ptype |= (tun ? RTE_PTYPE_INNER_L4_UDP : RTE_PTYPE_L4_UDP);
+ ptype |= RTE_PTYPE_L4_UDP;
break;
case RTE_MBUF_F_TX_SCTP_CKSUM:
- ptype |= (tun ? RTE_PTYPE_INNER_L4_SCTP : RTE_PTYPE_L4_SCTP);
+ ptype |= RTE_PTYPE_L4_SCTP;
break;
}
if (oflags & RTE_MBUF_F_TX_TCP_SEG)
- ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
-
- /* Tunnel */
- switch (oflags & RTE_MBUF_F_TX_TUNNEL_MASK) {
- case RTE_MBUF_F_TX_TUNNEL_IPIP:
- case RTE_MBUF_F_TX_TUNNEL_IP:
- ptype |= RTE_PTYPE_L2_ETHER |
- RTE_PTYPE_L3_IPV4 |
- RTE_PTYPE_TUNNEL_IP;
- break;
- }
+ ptype |= RTE_PTYPE_L4_TCP;
+
+ return ptype;
+}
+
+static inline uint8_t
+tx_desc_ol_flags_to_ptid(uint64_t oflags)
+{
+ uint32_t ptype;
+
+ ptype = tx_desc_ol_flags_to_ptype(oflags);
return ngbe_encode_ptype(ptype);
}
@@ -622,16 +588,12 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/* If hardware offload required */
tx_ol_req = ol_flags & NGBE_TX_OFFLOAD_MASK;
if (tx_ol_req) {
- tx_offload.ptid = tx_desc_ol_flags_to_ptid(tx_ol_req,
- tx_pkt->packet_type);
+ tx_offload.ptid = tx_desc_ol_flags_to_ptid(tx_ol_req);
tx_offload.l2_len = tx_pkt->l2_len;
tx_offload.l3_len = tx_pkt->l3_len;
tx_offload.l4_len = tx_pkt->l4_len;
tx_offload.vlan_tci = tx_pkt->vlan_tci;
tx_offload.tso_segsz = tx_pkt->tso_segsz;
- tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
- tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
- tx_offload.outer_tun_len = 0;
/* If new context need be built or reuse the exist ctx*/
ctx = what_ctx_update(txq, tx_ol_req, tx_offload);
@@ -752,10 +714,6 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
*/
pkt_len -= (tx_offload.l2_len +
tx_offload.l3_len + tx_offload.l4_len);
- pkt_len -=
- (tx_pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
- ? tx_offload.outer_l2_len +
- tx_offload.outer_l3_len : 0;
}
/*
@@ -1939,12 +1897,8 @@ ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
- RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
RTE_ETH_TX_OFFLOAD_TCP_TSO |
RTE_ETH_TX_OFFLOAD_UDP_TSO |
- RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
if (hw->is_pf)
--
2.27.0
* [PATCH v2 06/10] net/ngbe: add spinlock protection on YT PHY
[not found] ` <20230202092132.3271910-1-jiawenwu@trustnetic.com>
` (4 preceding siblings ...)
2023-02-02 9:21 ` [PATCH v2 05/10] net/ngbe: " Jiawen Wu
@ 2023-02-02 9:21 ` Jiawen Wu
2023-02-02 9:21 ` [PATCH v2 09/10] net/txgbe: fix interrupt loss Jiawen Wu
6 siblings, 0 replies; 27+ messages in thread
From: Jiawen Wu @ 2023-02-02 9:21 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu, stable
For the yt8521s/yt8531s PHY, if other registers are accessed between
reads/writes of the ext field registers, the value of the ext field
registers will get corrupted for unknown reasons. So protect all
accesses to the ext field registers with a spinlock.
Fixes: 44e97550ca68 ("net/ngbe: identify and reset PHY")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/base/ngbe_phy_yt.c | 36 +++++++++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_type.h | 1 +
2 files changed, 37 insertions(+)
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.c b/drivers/net/ngbe/base/ngbe_phy_yt.c
index c88946f7c3..726d6c8ef5 100644
--- a/drivers/net/ngbe/base/ngbe_phy_yt.c
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.c
@@ -100,11 +100,15 @@ s32 ngbe_write_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
s32 ngbe_init_phy_yt(struct ngbe_hw *hw)
{
+ rte_spinlock_init(&hw->phy_lock);
+
+ rte_spinlock_lock(&hw->phy_lock);
/* close sds area register */
ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, 0, 0);
/* enable interrupts */
ngbe_write_phy_reg_mdi(hw, YT_INTR, 0,
YT_INTR_ENA_MASK | YT_SDS_INTR_ENA_MASK);
+ rte_spinlock_unlock(&hw->phy_lock);
hw->phy.set_phy_power(hw, false);
@@ -123,7 +127,9 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
hw->phy.autoneg_advertised = 0;
/* check chip_mode first */
+ rte_spinlock_lock(&hw->phy_lock);
ngbe_read_phy_reg_ext_yt(hw, YT_CHIP, 0, &value);
+ rte_spinlock_unlock(&hw->phy_lock);
if ((value & YT_CHIP_MODE_MASK) == YT_CHIP_MODE_SEL(0)) {
/* UTP to rgmii */
if (!hw->mac.autoneg) {
@@ -146,11 +152,14 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
}
/* duplex full */
value |= YT_BCR_DUPLEX | YT_BCR_RESET;
+ rte_spinlock_lock(&hw->phy_lock);
ngbe_write_phy_reg_mdi(hw, YT_BCR, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
goto skip_an;
}
+ rte_spinlock_lock(&hw->phy_lock);
/*disable 100/10base-T Self-negotiation ability*/
ngbe_read_phy_reg_mdi(hw, YT_ANA, 0, &value);
value &= ~(YT_ANA_100BASET_FULL | YT_ANA_100BASET_HALF |
@@ -189,6 +198,7 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
ngbe_read_phy_reg_mdi(hw, YT_BCR, 0, &value);
value |= YT_BCR_RESET | YT_BCR_ANE | YT_BCR_RESTART_AN;
ngbe_write_phy_reg_mdi(hw, YT_BCR, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
skip_an:
hw->phy.set_phy_power(hw, true);
} else if ((value & YT_CHIP_MODE_MASK) == YT_CHIP_MODE_SEL(1)) {
@@ -199,6 +209,7 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
value = YT_RGMII_CONF1_RXDELAY |
YT_RGMII_CONF1_TXDELAY_FE |
YT_RGMII_CONF1_TXDELAY;
+ rte_spinlock_lock(&hw->phy_lock);
ngbe_write_phy_reg_ext_yt(hw, YT_RGMII_CONF1, 0, value);
value = YT_CHIP_MODE_SEL(1) |
YT_CHIP_SW_LDO_EN |
@@ -225,17 +236,21 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
value = YT_BCR_RESET | YT_BCR_DUPLEX |
YT_BCR_SPEED_SELECT1;
hw->phy.write_reg(hw, YT_BCR, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
hw->phy.set_phy_power(hw, true);
} else if ((value & YT_CHIP_MODE_MASK) == YT_CHIP_MODE_SEL(2)) {
hw->phy.set_phy_power(hw, true);
+ rte_spinlock_lock(&hw->phy_lock);
hw->phy.read_reg(hw, YT_SPST, 0, &value);
+ rte_spinlock_unlock(&hw->phy_lock);
if (value & YT_SPST_LINK) {
/* fiber up */
hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_1GB_FULL;
} else {
/* utp up */
+ rte_spinlock_lock(&hw->phy_lock);
/*disable 100/10base-T Self-negotiation ability*/
ngbe_read_phy_reg_mdi(hw, YT_ANA, 0, &value);
value &= ~(YT_ANA_100BASET_FULL | YT_ANA_100BASET_HALF |
@@ -279,10 +294,12 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
ngbe_read_phy_reg_mdi(hw, YT_BCR, 0, &value);
value |= YT_BCR_RESET;
ngbe_write_phy_reg_mdi(hw, YT_BCR, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
}
} else if ((value & YT_CHIP_MODE_MASK) == YT_CHIP_MODE_SEL(4)) {
hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_1GB_FULL;
+ rte_spinlock_lock(&hw->phy_lock);
ngbe_read_phy_reg_ext_yt(hw, YT_RGMII_CONF1, 0, &value);
value |= YT_RGMII_CONF1_MODE;
ngbe_write_phy_reg_ext_yt(hw, YT_RGMII_CONF1, 0, value);
@@ -297,6 +314,7 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
ngbe_read_phy_reg_ext_yt(hw, YT_CHIP, 0, &value);
value &= ~YT_SMI_PHY_SW_RST;
ngbe_write_phy_reg_ext_yt(hw, YT_CHIP, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
hw->phy.set_phy_power(hw, true);
} else if ((value & YT_CHIP_MODE_MASK) == YT_CHIP_MODE_SEL(5)) {
@@ -320,7 +338,9 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
}
/* duplex full */
value |= YT_BCR_DUPLEX | YT_BCR_RESET;
+ rte_spinlock_lock(&hw->phy_lock);
hw->phy.write_reg(hw, YT_BCR, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
goto skip_an_sr;
}
@@ -339,19 +359,23 @@ s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
/* duplex full */
value |= YT_BCR_DUPLEX | YT_BCR_RESET;
+ rte_spinlock_lock(&hw->phy_lock);
hw->phy.write_reg(hw, YT_BCR, 0, value);
/* software reset to make the above configuration take effect */
hw->phy.read_reg(hw, YT_BCR, 0, &value);
value |= YT_BCR_RESET | YT_BCR_ANE | YT_BCR_RESTART_AN;
hw->phy.write_reg(hw, 0x0, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
skip_an_sr:
hw->phy.set_phy_power(hw, true);
}
+ rte_spinlock_lock(&hw->phy_lock);
ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, 0, 0);
ngbe_read_phy_reg_mdi(hw, YT_INTR_STATUS, 0, &value);
+ rte_spinlock_unlock(&hw->phy_lock);
return 0;
}
@@ -366,6 +390,7 @@ s32 ngbe_reset_phy_yt(struct ngbe_hw *hw)
hw->phy.type != ngbe_phy_yt8521s_sfi)
return NGBE_ERR_PHY_TYPE;
+ rte_spinlock_lock(&hw->phy_lock);
/* check chip_mode first */
ngbe_read_phy_reg_ext_yt(hw, YT_CHIP, 0, &ctrl);
if (ctrl & YT_CHIP_MODE_MASK) {
@@ -395,6 +420,7 @@ s32 ngbe_reset_phy_yt(struct ngbe_hw *hw)
msleep(1);
}
}
+ rte_spinlock_unlock(&hw->phy_lock);
if (i == YT_PHY_RST_WAIT_PERIOD) {
DEBUGOUT("PHY reset polling failed to complete.");
@@ -409,7 +435,9 @@ s32 ngbe_get_phy_advertised_pause_yt(struct ngbe_hw *hw, u8 *pause_bit)
u16 value;
s32 status = 0;
+ rte_spinlock_lock(&hw->phy_lock);
status = hw->phy.read_reg(hw, YT_ANA, 0, &value);
+ rte_spinlock_unlock(&hw->phy_lock);
value &= YT_FANA_PAUSE_MASK;
*pause_bit = (u8)(value >> 7);
@@ -421,7 +449,9 @@ s32 ngbe_get_phy_lp_advertised_pause_yt(struct ngbe_hw *hw, u8 *pause_bit)
u16 value;
s32 status = 0;
+ rte_spinlock_lock(&hw->phy_lock);
status = hw->phy.read_reg(hw, YT_LPAR, 0, &value);
+ rte_spinlock_unlock(&hw->phy_lock);
value &= YT_FLPAR_PAUSE_MASK;
*pause_bit = (u8)(value >> 7);
@@ -433,10 +463,12 @@ s32 ngbe_set_phy_pause_adv_yt(struct ngbe_hw *hw, u16 pause_bit)
u16 value;
s32 status = 0;
+ rte_spinlock_lock(&hw->phy_lock);
status = hw->phy.read_reg(hw, YT_ANA, 0, &value);
value &= ~YT_FANA_PAUSE_MASK;
value |= pause_bit;
status = hw->phy.write_reg(hw, YT_ANA, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
return status;
}
@@ -453,6 +485,7 @@ s32 ngbe_check_phy_link_yt(struct ngbe_hw *hw,
/* Initialize speed and link to default case */
*link_up = false;
*speed = NGBE_LINK_SPEED_UNKNOWN;
+ rte_spinlock_lock(&hw->phy_lock);
ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, 0, 0);
ngbe_read_phy_reg_mdi(hw, YT_INTR_STATUS, 0, &insr);
@@ -472,6 +505,7 @@ s32 ngbe_check_phy_link_yt(struct ngbe_hw *hw,
*link_up = true;
}
+ rte_spinlock_unlock(&hw->phy_lock);
if (*link_up) {
if (phy_speed == YT_SPST_SPEED_1000M)
*speed = NGBE_LINK_SPEED_1GB_FULL;
@@ -488,6 +522,7 @@ s32 ngbe_set_phy_power_yt(struct ngbe_hw *hw, bool on)
{
u16 value = 0;
+ rte_spinlock_lock(&hw->phy_lock);
/* power down/up in fiber mode */
hw->phy.read_reg(hw, YT_BCR, 0, &value);
if (on)
@@ -504,6 +539,7 @@ s32 ngbe_set_phy_power_yt(struct ngbe_hw *hw, bool on)
else
value |= YT_BCR_PWDN;
ngbe_write_phy_reg_mdi(hw, YT_BCR, 0, value);
+ rte_spinlock_unlock(&hw->phy_lock);
return 0;
}
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index aa5c41146c..05804eeab7 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -433,6 +433,7 @@ struct ngbe_hw {
bool gpio_ctl;
u32 led_conf;
bool init_phy;
+ rte_spinlock_t phy_lock;
struct {
u64 rx_qp_packets;
u64 tx_qp_packets;
--
2.27.0
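The pattern the patch above applies is worth spelling out: every YT PHY
access, whether through the MDIO helpers or the extended-register helpers,
is now bracketed by a single per-device spinlock, held across whole
read-modify-write sequences so that link setup, link check, and power
control cannot interleave their accesses. Below is a minimal sketch of
that pattern; the example_* names and the init call site are assumptions
for illustration only -- only the phy_lock field and the rte_spinlock_*()
calls mirror the patch.

#include <stdint.h>
#include <rte_spinlock.h>

/* Illustrative stand-in for struct ngbe_hw; only phy_lock mirrors the
 * field the patch adds in ngbe_type.h. */
struct example_hw {
	rte_spinlock_t phy_lock;
};

/* Assumed to run once during device init, before any code path can touch
 * the PHY (the hunk doing this in the driver is not shown above). */
static void
example_phy_lock_init(struct example_hw *hw)
{
	rte_spinlock_init(&hw->phy_lock);
}

/* Hold the lock across the whole read-modify-write, as the diff does,
 * so concurrent link-check/link-setup paths cannot interleave. */
static void
example_phy_rmw(struct example_hw *hw, uint16_t set_bits)
{
	uint16_t value = 0;

	rte_spinlock_lock(&hw->phy_lock);
	/* e.g. ngbe_read_phy_reg_mdi(hw, reg, 0, &value); */
	value |= set_bits;
	/* e.g. ngbe_write_phy_reg_mdi(hw, reg, 0, value); */
	rte_spinlock_unlock(&hw->phy_lock);
}

One design point visible in the diff: the lock wraps multi-register
sequences (for example the YT_SMI_PHY select followed by the
YT_INTR_STATUS read), not the individual register helpers, so another
thread can never observe the PHY between the select and the read.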
* [PATCH v2 09/10] net/txgbe: fix interrupt loss
[not found] ` <20230202092132.3271910-1-jiawenwu@trustnetic.com>
` (5 preceding siblings ...)
2023-02-02 9:21 ` [PATCH v2 06/10] net/ngbe: add spinlock protection on YT PHY Jiawen Wu
@ 2023-02-02 9:21 ` Jiawen Wu
6 siblings, 0 replies; 27+ messages in thread
From: Jiawen Wu @ 2023-02-02 9:21 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu, stable
Disabling interrupts in the interrupt handling process will sometimes cause
interrupts to be lost.
Fixes: 2fc745e6b606 ("net/txgbe: add interrupt operation")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/txgbe/txgbe_ethdev.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index ce7cf2506d..6e939b8ce3 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -2972,9 +2972,6 @@ txgbe_dev_interrupt_get_status(struct rte_eth_dev *dev,
rte_intr_type_get(intr_handle) != RTE_INTR_HANDLE_VFIO_MSIX)
wr32(hw, TXGBE_PX_INTA, 1);
- /* clear all cause mask */
- txgbe_disable_intr(hw);
-
/* read-on-clear nic registers here */
eicr = ((u32 *)hw->isb_mem)[TXGBE_ISB_MISC];
PMD_DRV_LOG(DEBUG, "eicr %x", eicr);
@@ -3000,6 +2997,8 @@ txgbe_dev_interrupt_get_status(struct rte_eth_dev *dev,
if (eicr & TXGBE_ICRMISC_HEAT)
intr->flags |= TXGBE_FLAG_OVERHEAT;
+ ((u32 *)hw->isb_mem)[TXGBE_ISB_MISC] = 0;
+
return 0;
}
--
2.27.0
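A sketch of the resulting read-on-clear flow may help. Everything here
except the isb_mem indexing visible in the diff is a hypothetical name;
this illustrates the idea rather than reproducing the driver's function.

#include <stdint.h>

#define EXAMPLE_ISB_MISC 0	/* index assumed for illustration */

struct example_hw {
	volatile uint32_t *isb_mem;	/* status block written by HW */
};

static uint32_t
example_get_misc_status(struct example_hw *hw)
{
	/* Read the latched cause bits... */
	uint32_t eicr = hw->isb_mem[EXAMPLE_ISB_MISC];

	/* ...then clear only the status word. The dropped
	 * txgbe_disable_intr() call masked every interrupt cause at
	 * this point, so a cause firing before the handler re-enabled
	 * the mask could be lost; clearing just the status word leaves
	 * the mask untouched and closes that window. */
	hw->isb_mem[EXAMPLE_ISB_MISC] = 0;

	return eicr;
}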
* Re: [PATCH v2 01/10] net/ngbe: fix Rx buffer size in configure register
2023-02-02 9:21 ` [PATCH v2 01/10] net/ngbe: fix Rx buffer size in configure register Jiawen Wu
@ 2023-02-08 10:28 ` Ferruh Yigit
2023-02-09 9:00 ` Jiawen Wu
0 siblings, 1 reply; 27+ messages in thread
From: Ferruh Yigit @ 2023-02-08 10:28 UTC (permalink / raw)
To: Jiawen Wu, dev; +Cc: stable
On 2/2/2023 9:21 AM, Jiawen Wu wrote:
> When the buffer size is less than 1K, rounding down makes it 0, which is
> an error value.
>
> Fixes: 62fc35e63d0e ("net/ngbe: support Rx queue start/stop")
> Cc: stable@dpdk.org
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> drivers/net/ngbe/ngbe_rxtx.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
> index 9fd24fa444..9a646cb6a7 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.c
> +++ b/drivers/net/ngbe/ngbe_rxtx.c
> @@ -2944,7 +2944,10 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
> */
> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
> RTE_PKTMBUF_HEADROOM);
> - buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
> + if (buf_size < 1024)
> + buf_size = ROUND_UP(buf_size, 0x1 << 10);
Back to the original problem statement in the previous version: can't this
cause HW to receive packets exceeding the buffer size?
If HW accepts buffer sizes in multiples of 1K, does this mean any buffer
size less than 1K is an error condition for this HW?
> + else
> + buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
> srrctl |= NGBE_RXCFG_PKTLEN(buf_size);
>
> wr32(hw, NGBE_RXCFG(rxq->reg_idx), srrctl);
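To make the rounding behaviour under discussion concrete, here is a small
standalone sketch. The EX_* macros assume the usual power-of-two alignment
semantics; the driver's own ROUND_UP/ROUND_DOWN definitions are not shown
in this thread, so treat that as an assumption.

#include <stdio.h>

/* Assumed power-of-two alignment semantics, for illustration only. */
#define EX_ROUND_DOWN(x, n) ((x) & ~((n) - 1))
#define EX_ROUND_UP(x, n)   (((x) + (n) - 1) & ~((n) - 1))

int
main(void)
{
	unsigned int small = 512;	/* hypothetical sub-1K data room */

	/* Rounding down gives 0: the invalid register value this patch
	 * wants to avoid. */
	printf("down(512) = %u\n", EX_ROUND_DOWN(small, 1u << 10));
	/* Rounding up gives 1024: the register then promises more room
	 * than the mbuf really has, which is the concern raised above. */
	printf("up(512)   = %u\n", EX_ROUND_UP(small, 1u << 10));
	return 0;
}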
* RE: [PATCH v2 01/10] net/ngbe: fix Rx buffer size in configure register
2023-02-08 10:28 ` Ferruh Yigit
@ 2023-02-09 9:00 ` Jiawen Wu
2023-02-14 8:15 ` Jiawen Wu
0 siblings, 1 reply; 27+ messages in thread
From: Jiawen Wu @ 2023-02-09 9:00 UTC (permalink / raw)
To: 'Ferruh Yigit', dev; +Cc: stable
On Wednesday, February 8, 2023 6:28 PM, Ferruh Yigit wrote:
> On 2/2/2023 9:21 AM, Jiawen Wu wrote:
> > When the buffer size is less than 1K, rounding down makes it 0, which
> > is an error value.
> >
> > Fixes: 62fc35e63d0e ("net/ngbe: support Rx queue start/stop")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> > ---
> > drivers/net/ngbe/ngbe_rxtx.c | 5 ++++-
> > 1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/ngbe/ngbe_rxtx.c
> > b/drivers/net/ngbe/ngbe_rxtx.c index 9fd24fa444..9a646cb6a7 100644
> > --- a/drivers/net/ngbe/ngbe_rxtx.c
> > +++ b/drivers/net/ngbe/ngbe_rxtx.c
> > @@ -2944,7 +2944,10 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
> > */
> > buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
> > RTE_PKTMBUF_HEADROOM);
> > - buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
> > + if (buf_size < 1024)
> > + buf_size = ROUND_UP(buf_size, 0x1 << 10);
>
> Back to the original problem statement in the previous version: can't this cause
> HW to receive packets exceeding the buffer size?
>
> If HW accepts buffer sizes in multiples of 1K, does this mean any buffer size less
> than 1K is an error condition for this HW?
>
After rechecking the code, the minimum buffer size is limited to 1K by the
txgbe/ngbe drivers [1].
I think the v1 patch for txgbe is enough.
[1]
static int
txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
dev_info->min_rx_bufsize = 1024;
> > + else
> > + buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
> > srrctl |= NGBE_RXCFG_PKTLEN(buf_size);
> >
> > wr32(hw, NGBE_RXCFG(rxq->reg_idx), srrctl);
>
>
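For context on why the 1K floor makes the sub-1K case unreachable: the
ethdev layer validates the mempool's data room against min_rx_bufsize at
Rx queue setup time. The check below is a hedged paraphrase of the
validation in rte_eth_rx_queue_setup(), not a verbatim quote of ethdev.

#include <errno.h>
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int
example_check_rx_pool(const struct rte_eth_dev_info *dev_info,
		      struct rte_mempool *mp)
{
	uint32_t mbp_buf_size = rte_pktmbuf_data_room_size(mp);

	/* With min_rx_bufsize = 1024, any pool passing this check gives
	 * the PMD a buf_size >= 1K, so ROUND_DOWN can never yield 0. */
	if (mbp_buf_size < dev_info->min_rx_bufsize + RTE_PKTMBUF_HEADROOM)
		return -EINVAL;

	return 0;
}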
* RE: [PATCH v2 01/10] net/ngbe: fix Rx buffer size in configure register
2023-02-09 9:00 ` Jiawen Wu
@ 2023-02-14 8:15 ` Jiawen Wu
2023-02-14 9:55 ` Ferruh Yigit
0 siblings, 1 reply; 27+ messages in thread
From: Jiawen Wu @ 2023-02-14 8:15 UTC (permalink / raw)
To: 'Ferruh Yigit', dev; +Cc: stable
On Thursday, February 9, 2023 5:00 PM, Jiawen Wu wrote:
> On Wednesday, February 8, 2023 6:28 PM, Ferruh Yigit wrote:
> > On 2/2/2023 9:21 AM, Jiawen Wu wrote:
> > > When the buffer size is less than 1K, rounding down makes it 0,
> > > which is an error value.
> > >
> > > Fixes: 62fc35e63d0e ("net/ngbe: support Rx queue start/stop")
> > > Cc: stable@dpdk.org
> > >
> > > Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> > > ---
> > > drivers/net/ngbe/ngbe_rxtx.c | 5 ++++-
> > > 1 file changed, 4 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/net/ngbe/ngbe_rxtx.c
> > > b/drivers/net/ngbe/ngbe_rxtx.c index 9fd24fa444..9a646cb6a7 100644
> > > --- a/drivers/net/ngbe/ngbe_rxtx.c
> > > +++ b/drivers/net/ngbe/ngbe_rxtx.c
> > > @@ -2944,7 +2944,10 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
> > > */
> > > buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
> > > RTE_PKTMBUF_HEADROOM);
> > > - buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
> > > + if (buf_size < 1024)
> > > + buf_size = ROUND_UP(buf_size, 0x1 << 10);
> >
> > Back to the original problem statement in the previous version: can't
> > this cause HW to receive packets exceeding the buffer size?
> >
> > If HW accepts buffer sizes in multiples of 1K, does this mean any buffer
> > size less than 1K is an error condition for this HW?
> >
>
> After rechecking the code, the minimum buffer size is limited to 1K by the
> txgbe/ngbe drivers [1].
> I think the v1 patch for txgbe is enough.
>
> [1]
> static int
> txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) {
> struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
>
> dev_info->min_rx_bufsize = 1024;
>
>
> > > + else
> > > + buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
> > > srrctl |= NGBE_RXCFG_PKTLEN(buf_size);
> > >
> > > wr32(hw, NGBE_RXCFG(rxq->reg_idx), srrctl);
> >
> >
>
Hi Ferruh,
Is my proposal feasible, or do I need to send a v3 patch for it?
* Re: [PATCH v2 01/10] net/ngbe: fix Rx buffer size in configure register
2023-02-14 8:15 ` Jiawen Wu
@ 2023-02-14 9:55 ` Ferruh Yigit
2023-02-15 9:35 ` Ferruh Yigit
0 siblings, 1 reply; 27+ messages in thread
From: Ferruh Yigit @ 2023-02-14 9:55 UTC (permalink / raw)
To: Jiawen Wu, dev; +Cc: stable
On 2/14/2023 8:15 AM, Jiawen Wu wrote:
> On Thursday, February 9, 2023 5:00 PM, Jiawen Wu wrote:
>> On Wednesday, February 8, 2023 6:28 PM, Ferruh Yigit wrote:
>>> On 2/2/2023 9:21 AM, Jiawen Wu wrote:
>>>> When the buffer size is less than 1K, rounding down makes it 0,
>>>> which is an error value.
>>>>
>>>> Fixes: 62fc35e63d0e ("net/ngbe: support Rx queue start/stop")
>>>> Cc: stable@dpdk.org
>>>>
>>>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
>>>> ---
>>>> drivers/net/ngbe/ngbe_rxtx.c | 5 ++++-
>>>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/net/ngbe/ngbe_rxtx.c
>>>> b/drivers/net/ngbe/ngbe_rxtx.c index 9fd24fa444..9a646cb6a7 100644
>>>> --- a/drivers/net/ngbe/ngbe_rxtx.c
>>>> +++ b/drivers/net/ngbe/ngbe_rxtx.c
>>>> @@ -2944,7 +2944,10 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
>>>> */
>>>> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
>>>> RTE_PKTMBUF_HEADROOM);
>>>> - buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
>>>> + if (buf_size < 1024)
>>>> + buf_size = ROUND_UP(buf_size, 0x1 << 10);
>>>
>>> Back to the original problem statement in the previous version: can't
>>> this cause HW to receive packets exceeding the buffer size?
>>>
>>> If HW accepts buffer sizes in multiples of 1K, does this mean any buffer
>>> size less than 1K is an error condition for this HW?
>>>
>>
>> After rechecking the code, the minimum buffer size is limited to 1K by
>> the txgbe/ngbe drivers [1].
>> I think the v1 patch for txgbe is enough.
>>
>> [1]
>> static int
>> txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) {
>> struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
>> struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
>>
>> dev_info->min_rx_bufsize = 1024;
>>
>>
>>>> + else
>>>> + buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
>>>> srrctl |= NGBE_RXCFG_PKTLEN(buf_size);
>>>>
>>>> wr32(hw, NGBE_RXCFG(rxq->reg_idx), srrctl);
>>>
>>>
>>
>
> Hi Ferruh,
>
> Is my proposal feasible, or do I need to send a v3 patch for it?
>
>
Sorry Jiawen, I missed your response.
Yes, you are right, 'dev_info->min_rx_bufsize' prevents the user from
setting a buffer size less than 1K, so the change in v1 is good.
There were some other changes too; instead of taking this patch from
v1, can you please send a new version with the latest updates?
* [PATCH v3] net/txgbe: fix Rx buffer size in configure register
[not found] <20230118060039.3074016-1-jiawenwu@trustnetic.com>
` (5 preceding siblings ...)
[not found] ` <20230202092132.3271910-1-jiawenwu@trustnetic.com>
@ 2023-02-15 2:00 ` Jiawen Wu
2023-02-15 10:24 ` Ferruh Yigit
6 siblings, 1 reply; 27+ messages in thread
From: Jiawen Wu @ 2023-02-15 2:00 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu, stable
When the buffer size is rounded up to 1K to configure the register,
hardware will receive packets exceeding the buffer size in LRO mode. This
causes a segmentation fault in the receive function.
Fixes: be797cbf4582 ("net/txgbe: add Rx and Tx init")
Cc: stable@dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/txgbe/txgbe_rxtx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index ac1bba08a3..ae70ca3beb 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -4382,7 +4382,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
*/
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
RTE_PKTMBUF_HEADROOM);
- buf_size = ROUND_UP(buf_size, 0x1 << 10);
+ buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
srrctl |= TXGBE_RXCFG_PKTLEN(buf_size);
wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
--
2.27.0
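A worked example of why rounding down is the safe direction for txgbe
under LRO. The pool numbers are hypothetical but typical for a default
mbuf pool, and the bit-masking assumes the usual power-of-two rounding.

#include <stdint.h>

/* Hypothetical pool: 2048-byte data room, 128-byte default headroom,
 * so buf_size = 1920 reaches the rounding above. */
static uint16_t
example_reg_buf_size(void)
{
	uint16_t buf_size = 2048 - 128;	/* 1920 */

	/* ROUND_UP(1920, 1K) = 2048: the register advertises more room
	 * than the mbuf has, and in LRO mode hardware can write past
	 * the buffer -- the segfault the commit message describes.
	 * ROUND_DOWN(1920, 1K) = 1024: hardware never exceeds the mbuf,
	 * at the cost of some unused tail room. */
	return (uint16_t)(buf_size & ~((1u << 10) - 1u));
}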
* Re: [PATCH v2 01/10] net/ngbe: fix Rx buffer size in configure register
2023-02-14 9:55 ` Ferruh Yigit
@ 2023-02-15 9:35 ` Ferruh Yigit
2023-02-15 10:09 ` Jiawen Wu
0 siblings, 1 reply; 27+ messages in thread
From: Ferruh Yigit @ 2023-02-15 9:35 UTC (permalink / raw)
To: Jiawen Wu, dev; +Cc: stable
On 2/14/2023 9:55 AM, Ferruh Yigit wrote:
> On 2/14/2023 8:15 AM, Jiawen Wu wrote:
>> On Thursday, February 9, 2023 5:00 PM, Jiawen Wu wrote:
>>> On Wednesday, February 8, 2023 6:28 PM, Ferruh Yigit wrote:
>>>> On 2/2/2023 9:21 AM, Jiawen Wu wrote:
>>>>> When the buffer size is less than 1K, rounding down makes it 0,
>>>>> which is an error value.
>>>>>
>>>>> Fixes: 62fc35e63d0e ("net/ngbe: support Rx queue start/stop")
>>>>> Cc: stable@dpdk.org
>>>>>
>>>>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
>>>>> ---
>>>>> drivers/net/ngbe/ngbe_rxtx.c | 5 ++++-
>>>>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/net/ngbe/ngbe_rxtx.c
>>>>> b/drivers/net/ngbe/ngbe_rxtx.c index 9fd24fa444..9a646cb6a7 100644
>>>>> --- a/drivers/net/ngbe/ngbe_rxtx.c
>>>>> +++ b/drivers/net/ngbe/ngbe_rxtx.c
>>>>> @@ -2944,7 +2944,10 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
>>>>> */
>>>>> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
>>>>> RTE_PKTMBUF_HEADROOM);
>>>>> - buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
>>>>> + if (buf_size < 1024)
>>>>> + buf_size = ROUND_UP(buf_size, 0x1 << 10);
>>>>
>>>> Back to the original problem statement in the previous version: can't
>>>> this cause HW to receive packets exceeding the buffer size?
>>>>
>>>> If HW accepts buffer sizes in multiples of 1K, does this mean any
>>>> buffer size less than 1K is an error condition for this HW?
>>>>
>>>
>>> After rechecking the code, the minimum buffer size is limited to 1K
>>> by the txgbe/ngbe drivers [1].
>>> I think the v1 patch for txgbe is enough.
>>>
>>> [1]
>>> static int
>>> txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) {
>>> struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
>>> struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
>>>
>>> dev_info->min_rx_bufsize = 1024;
>>>
>>>
>>>>> + else
>>>>> + buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
>>>>> srrctl |= NGBE_RXCFG_PKTLEN(buf_size);
>>>>>
>>>>> wr32(hw, NGBE_RXCFG(rxq->reg_idx), srrctl);
>>>>
>>>>
>>>
>>
>> Hi Ferruh,
>>
>> Is my proposal feasible or do I need to send v3 patch for it?
>>
>>
>
> Sorry Jiawen, I missed your response.
>
> Yes, you are right, 'dev_info->min_rx_bufsize' prevents the user from
> setting a buffer size less than 1K, so the change in v1 is good.
>
> There were some other changes too; instead of taking this patch from
> v1, can you please send a new version with the latest updates?
We can drop 1/10 and you have sent a v3 for 2/10, right?
* RE: [PATCH v2 01/10] net/ngbe: fix Rx buffer size in configure register
2023-02-15 9:35 ` Ferruh Yigit
@ 2023-02-15 10:09 ` Jiawen Wu
0 siblings, 0 replies; 27+ messages in thread
From: Jiawen Wu @ 2023-02-15 10:09 UTC (permalink / raw)
To: 'Ferruh Yigit', dev; +Cc: stable
On Wednesday, February 15, 2023 5:36 PM, Ferruh Yigit wrote:
> On 2/14/2023 9:55 AM, Ferruh Yigit wrote:
> > On 2/14/2023 8:15 AM, Jiawen Wu wrote:
> >> On Thursday, February 9, 2023 5:00 PM, Jiawen Wu wrote:
> >>> On Wednesday, February 8, 2023 6:28 PM, Ferruh Yigit wrote:
> >>>> On 2/2/2023 9:21 AM, Jiawen Wu wrote:
> >>>>> When the buffer size is less than 1K, rounding down makes it 0,
> >>>>> which is an error value.
> >>>>>
> >>>>> Fixes: 62fc35e63d0e ("net/ngbe: support Rx queue start/stop")
> >>>>> Cc: stable@dpdk.org
> >>>>>
> >>>>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> >>>>> ---
> >>>>> drivers/net/ngbe/ngbe_rxtx.c | 5 ++++-
> >>>>> 1 file changed, 4 insertions(+), 1 deletion(-)
> >>>>>
> >>>>> diff --git a/drivers/net/ngbe/ngbe_rxtx.c
> >>>>> b/drivers/net/ngbe/ngbe_rxtx.c index 9fd24fa444..9a646cb6a7 100644
> >>>>> --- a/drivers/net/ngbe/ngbe_rxtx.c
> >>>>> +++ b/drivers/net/ngbe/ngbe_rxtx.c
> >>>>> @@ -2944,7 +2944,10 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
> >>>>> */
> >>>>> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
> >>>>> RTE_PKTMBUF_HEADROOM);
> >>>>> - buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
> >>>>> + if (buf_size < 1024)
> >>>>> + buf_size = ROUND_UP(buf_size, 0x1 << 10);
> >>>>
> >>>> Back to the original problem statement in the previous version:
> >>>> can't this cause HW to receive packets exceeding the buffer size?
> >>>>
> >>>> If HW accepts buffer sizes in multiples of 1K, does this mean any
> >>>> buffer size less than 1K is an error condition for this HW?
> >>>>
> >>>
> >>> After rechecking the code, the minimum buffer size is limited to 1K
> >>> by the txgbe/ngbe drivers [1].
> >>> I think the v1 patch for txgbe is enough.
> >>>
> >>> [1]
> >>> static int
> >>> txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) {
> >>> struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> >>> struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
> >>>
> >>> dev_info->min_rx_bufsize = 1024;
> >>>
> >>>
> >>>>> + else
> >>>>> + buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
> >>>>> srrctl |= NGBE_RXCFG_PKTLEN(buf_size);
> >>>>>
> >>>>> wr32(hw, NGBE_RXCFG(rxq->reg_idx), srrctl);
> >>>>
> >>>>
> >>>
> >>
> >> Hi Ferruh,
> >>
> >> Is my proposal feasible, or do I need to send a v3 patch for it?
> >>
> >>
> >
> > Sorry Jiawen, I missed your response.
> >
> > Yes, you are right, 'dev_info->min_rx_bufsize' prevents the user from
> > setting a buffer size less than 1K, so the change in v1 is good.
> >
> > There were some other changes too; instead of taking this patch from
> > v1, can you please send a new version with the latest updates?
>
> We can drop 1/10 and you have sent a v3 for 2/10, right?
>
>
Yes.
* Re: [PATCH v3] net/txgbe: fix Rx buffer size in configure register
2023-02-15 2:00 ` [PATCH v3] net/txgbe: fix Rx buffer size in configure register Jiawen Wu
@ 2023-02-15 10:24 ` Ferruh Yigit
0 siblings, 0 replies; 27+ messages in thread
From: Ferruh Yigit @ 2023-02-15 10:24 UTC (permalink / raw)
To: Jiawen Wu, dev; +Cc: stable
On 2/15/2023 2:00 AM, Jiawen Wu wrote:
> When the buffer size is rounded up to 1K to configure the register,
> hardware will receive packets exceeding the buffer size in LRO mode. This
> causes a segmentation fault in the receive function.
>
> Fixes: be797cbf4582 ("net/txgbe: add Rx and Tx init")
> Cc: stable@dpdk.org
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Applied to dpdk-next-net/main, thanks.