* [dpdk-dev] [PATCH 00/32] net/ngbe: add many features
@ 2021-09-08 8:37 Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 01/32] net/ngbe: add packet type Jiawen Wu
` (31 more replies)
0 siblings, 32 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
This patch series adds a number of major features to complete the ngbe PMD.
Jiawen Wu (32):
net/ngbe: add packet type
net/ngbe: support scattered Rx
net/ngbe: support Rx checksum offload
net/ngbe: support TSO
net/ngbe: support CRC offload
net/ngbe: support jumbo frame
net/ngbe: support VLAN and QinQ offload
net/ngbe: support basic statistics
net/ngbe: support device xstats
net/ngbe: support MTU set
net/ngbe: add device promiscuous and allmulticast mode
net/ngbe: support getting FW version
net/ngbe: add loopback mode
net/ngbe: support Rx interrupt
net/ngbe: support MAC filters
net/ngbe: support VLAN filter
net/ngbe: support RSS hash
net/ngbe: support SRIOV
net/ngbe: add mailbox process operations
net/ngbe: support flow control
net/ngbe: support device LED on and off
net/ngbe: support EEPROM dump
net/ngbe: support register dump
net/ngbe: support timesync
net/ngbe: add Rx and Tx queue info get
net/ngbe: add Rx and Tx descriptor status
net/ngbe: add Tx done cleanup
net/ngbe: add IPsec context creation
net/ngbe: create and destroy security session
net/ngbe: support security operations
net/ngbe: add security offload in Rx and Tx
doc: update for ngbe
doc/guides/nics/features/ngbe.ini | 33 +
doc/guides/nics/ngbe.rst | 16 +
doc/guides/rel_notes/release_21_11.rst | 10 +
drivers/net/ngbe/base/meson.build | 1 +
drivers/net/ngbe/base/ngbe.h | 4 +
drivers/net/ngbe/base/ngbe_dummy.h | 131 ++
drivers/net/ngbe/base/ngbe_eeprom.c | 133 ++
drivers/net/ngbe/base/ngbe_eeprom.h | 10 +
drivers/net/ngbe/base/ngbe_hw.c | 912 ++++++++++-
drivers/net/ngbe/base/ngbe_hw.h | 24 +
drivers/net/ngbe/base/ngbe_mbx.c | 327 ++++
drivers/net/ngbe/base/ngbe_mbx.h | 89 +
drivers/net/ngbe/base/ngbe_mng.c | 85 +
drivers/net/ngbe/base/ngbe_mng.h | 18 +
drivers/net/ngbe/base/ngbe_phy.c | 9 +
drivers/net/ngbe/base/ngbe_phy.h | 3 +
drivers/net/ngbe/base/ngbe_phy_mvl.c | 57 +
drivers/net/ngbe/base/ngbe_phy_mvl.h | 4 +
drivers/net/ngbe/base/ngbe_phy_rtl.c | 42 +
drivers/net/ngbe/base/ngbe_phy_rtl.h | 3 +
drivers/net/ngbe/base/ngbe_phy_yt.c | 44 +
drivers/net/ngbe/base/ngbe_phy_yt.h | 6 +
drivers/net/ngbe/base/ngbe_type.h | 226 +++
drivers/net/ngbe/meson.build | 7 +
drivers/net/ngbe/ngbe_ethdev.c | 2077 ++++++++++++++++++++++-
drivers/net/ngbe/ngbe_ethdev.h | 199 +++
drivers/net/ngbe/ngbe_ipsec.c | 702 ++++++++
drivers/net/ngbe/ngbe_ipsec.h | 95 ++
drivers/net/ngbe/ngbe_pf.c | 760 +++++++++
drivers/net/ngbe/ngbe_ptypes.c | 300 ++++
drivers/net/ngbe/ngbe_ptypes.h | 240 +++
drivers/net/ngbe/ngbe_regs_group.h | 54 +
drivers/net/ngbe/ngbe_rxtx.c | 2083 +++++++++++++++++++++++-
drivers/net/ngbe/ngbe_rxtx.h | 84 +-
drivers/net/ngbe/rte_pmd_ngbe.h | 39 +
35 files changed, 8799 insertions(+), 28 deletions(-)
create mode 100644 drivers/net/ngbe/base/ngbe_mbx.c
create mode 100644 drivers/net/ngbe/base/ngbe_mbx.h
create mode 100644 drivers/net/ngbe/ngbe_ipsec.c
create mode 100644 drivers/net/ngbe/ngbe_ipsec.h
create mode 100644 drivers/net/ngbe/ngbe_pf.c
create mode 100644 drivers/net/ngbe/ngbe_ptypes.c
create mode 100644 drivers/net/ngbe/ngbe_ptypes.h
create mode 100644 drivers/net/ngbe/ngbe_regs_group.h
create mode 100644 drivers/net/ngbe/rte_pmd_ngbe.h
--
2.21.0.windows.1
* [dpdk-dev] [PATCH 01/32] net/ngbe: add packet type
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-15 16:47 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 02/32] net/ngbe: support scattered Rx Jiawen Wu
` (30 subsequent siblings)
31 siblings, 1 reply; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add packet type macro definitions and convert ptype to ptid.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 1 +
doc/guides/nics/ngbe.rst | 1 +
drivers/net/ngbe/meson.build | 1 +
drivers/net/ngbe/ngbe_ethdev.c | 9 +
drivers/net/ngbe/ngbe_ethdev.h | 4 +
drivers/net/ngbe/ngbe_ptypes.c | 300 ++++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_ptypes.h | 240 ++++++++++++++++++++++++
drivers/net/ngbe/ngbe_rxtx.c | 16 ++
drivers/net/ngbe/ngbe_rxtx.h | 2 +
9 files changed, 574 insertions(+)
create mode 100644 drivers/net/ngbe/ngbe_ptypes.c
create mode 100644 drivers/net/ngbe/ngbe_ptypes.h
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 08d5f1b0dc..8b7588184a 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -8,6 +8,7 @@ Speed capabilities = Y
Link status = Y
Link status event = Y
Queue start/stop = Y
+Packet type parsing = Y
Multiprocess aware = Y
Linux = Y
ARMv8 = Y
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 3ba3bb755f..d044397cd5 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -11,6 +11,7 @@ for Wangxun 1 Gigabit Ethernet NICs.
Features
--------
+- Packet type information
- Link state information
diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
index 815ef4da23..05f94fe7d6 100644
--- a/drivers/net/ngbe/meson.build
+++ b/drivers/net/ngbe/meson.build
@@ -12,6 +12,7 @@ objs = [base_objs]
sources = files(
'ngbe_ethdev.c',
+ 'ngbe_ptypes.c',
'ngbe_rxtx.c',
)
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615ad..4388d93560 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -667,6 +667,15 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
return 0;
}
+const uint32_t *
+ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+ if (dev->rx_pkt_burst == ngbe_recv_pkts)
+ return ngbe_get_supported_ptypes();
+
+ return NULL;
+}
+
/* return 0 means link status changed, -1 means not changed */
int
ngbe_dev_link_update_share(struct rte_eth_dev *dev,
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 7fb72f3f1f..486c6c3839 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -6,6 +6,8 @@
#ifndef _NGBE_ETHDEV_H_
#define _NGBE_ETHDEV_H_
+#include "ngbe_ptypes.h"
+
/* need update link, bit flag */
#define NGBE_FLAG_NEED_LINK_UPDATE ((uint32_t)(1 << 0))
#define NGBE_FLAG_MAILBOX ((uint32_t)(1 << 1))
@@ -131,4 +133,6 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
#define NGBE_DEFAULT_TX_HTHRESH 0
#define NGBE_DEFAULT_TX_WTHRESH 0
+const uint32_t *ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+
#endif /* _NGBE_ETHDEV_H_ */
diff --git a/drivers/net/ngbe/ngbe_ptypes.c b/drivers/net/ngbe/ngbe_ptypes.c
new file mode 100644
index 0000000000..d6d82105c9
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_ptypes.c
@@ -0,0 +1,300 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd.
+ */
+
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+
+#include "base/ngbe_type.h"
+#include "ngbe_ptypes.h"
+
+/* The ngbe_ptype_lookup is used to convert from the 8-bit ptid in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ */
+#define TPTE(ptid, l2, l3, l4, tun, el2, el3, el4) \
+ [ptid] = (RTE_PTYPE_L2_##l2 | \
+ RTE_PTYPE_L3_##l3 | \
+ RTE_PTYPE_L4_##l4 | \
+ RTE_PTYPE_TUNNEL_##tun | \
+ RTE_PTYPE_INNER_L2_##el2 | \
+ RTE_PTYPE_INNER_L3_##el3 | \
+ RTE_PTYPE_INNER_L4_##el4)
+
+#define RTE_PTYPE_L2_NONE 0
+#define RTE_PTYPE_L3_NONE 0
+#define RTE_PTYPE_L4_NONE 0
+#define RTE_PTYPE_TUNNEL_NONE 0
+#define RTE_PTYPE_INNER_L2_NONE 0
+#define RTE_PTYPE_INNER_L3_NONE 0
+#define RTE_PTYPE_INNER_L4_NONE 0
+
+static u32 ngbe_ptype_lookup[NGBE_PTID_MAX] __rte_cache_aligned = {
+ /* L2:0-3 L3:4-7 L4:8-11 TUN:12-15 EL2:16-19 EL3:20-23 EL4:24-27 */
+ /* L2: ETH */
+ TPTE(0x10, ETHER, NONE, NONE, NONE, NONE, NONE, NONE),
+ TPTE(0x11, ETHER, NONE, NONE, NONE, NONE, NONE, NONE),
+ TPTE(0x12, ETHER_TIMESYNC, NONE, NONE, NONE, NONE, NONE, NONE),
+ TPTE(0x13, ETHER_FIP, NONE, NONE, NONE, NONE, NONE, NONE),
+ TPTE(0x14, ETHER_LLDP, NONE, NONE, NONE, NONE, NONE, NONE),
+ TPTE(0x15, ETHER_CNM, NONE, NONE, NONE, NONE, NONE, NONE),
+ TPTE(0x16, ETHER_EAPOL, NONE, NONE, NONE, NONE, NONE, NONE),
+ TPTE(0x17, ETHER_ARP, NONE, NONE, NONE, NONE, NONE, NONE),
+ /* L2: Ethertype Filter */
+ TPTE(0x18, ETHER_FILTER, NONE, NONE, NONE, NONE, NONE, NONE),
+ TPTE(0x19, ETHER_FILTER, NONE, NONE, NONE, NONE, NONE, NONE),
+ TPTE(0x1A, ETHER_FILTER, NONE, NONE, NONE, NONE, NONE, NONE),
+ TPTE(0x1B, ETHER_FILTER, NONE, NONE, NONE, NONE, NONE, NONE),
+ TPTE(0x1C, ETHER_FILTER, NONE, NONE, NONE, NONE, NONE, NONE),
+ TPTE(0x1D, ETHER_FILTER, NONE, NONE, NONE, NONE, NONE, NONE),
+ TPTE(0x1E, ETHER_FILTER, NONE, NONE, NONE, NONE, NONE, NONE),
+ TPTE(0x1F, ETHER_FILTER, NONE, NONE, NONE, NONE, NONE, NONE),
+ /* L3: IP */
+ TPTE(0x20, ETHER, IPV4, NONFRAG, NONE, NONE, NONE, NONE),
+ TPTE(0x21, ETHER, IPV4, FRAG, NONE, NONE, NONE, NONE),
+ TPTE(0x22, ETHER, IPV4, NONFRAG, NONE, NONE, NONE, NONE),
+ TPTE(0x23, ETHER, IPV4, UDP, NONE, NONE, NONE, NONE),
+ TPTE(0x24, ETHER, IPV4, TCP, NONE, NONE, NONE, NONE),
+ TPTE(0x25, ETHER, IPV4, SCTP, NONE, NONE, NONE, NONE),
+ TPTE(0x29, ETHER, IPV6, FRAG, NONE, NONE, NONE, NONE),
+ TPTE(0x2A, ETHER, IPV6, NONFRAG, NONE, NONE, NONE, NONE),
+ TPTE(0x2B, ETHER, IPV6, UDP, NONE, NONE, NONE, NONE),
+ TPTE(0x2C, ETHER, IPV6, TCP, NONE, NONE, NONE, NONE),
+ TPTE(0x2D, ETHER, IPV6, SCTP, NONE, NONE, NONE, NONE),
+ /* IPv4 -> IPv4/IPv6 */
+ TPTE(0x81, ETHER, IPV4, NONE, IP, NONE, IPV4, FRAG),
+ TPTE(0x82, ETHER, IPV4, NONE, IP, NONE, IPV4, NONFRAG),
+ TPTE(0x83, ETHER, IPV4, NONE, IP, NONE, IPV4, UDP),
+ TPTE(0x84, ETHER, IPV4, NONE, IP, NONE, IPV4, TCP),
+ TPTE(0x85, ETHER, IPV4, NONE, IP, NONE, IPV4, SCTP),
+ TPTE(0x89, ETHER, IPV4, NONE, IP, NONE, IPV6, FRAG),
+ TPTE(0x8A, ETHER, IPV4, NONE, IP, NONE, IPV6, NONFRAG),
+ TPTE(0x8B, ETHER, IPV4, NONE, IP, NONE, IPV6, UDP),
+ TPTE(0x8C, ETHER, IPV4, NONE, IP, NONE, IPV6, TCP),
+ TPTE(0x8D, ETHER, IPV4, NONE, IP, NONE, IPV6, SCTP),
+ /* IPv6 -> IPv4/IPv6 */
+ TPTE(0xC1, ETHER, IPV6, NONE, IP, NONE, IPV4, FRAG),
+ TPTE(0xC2, ETHER, IPV6, NONE, IP, NONE, IPV4, NONFRAG),
+ TPTE(0xC3, ETHER, IPV6, NONE, IP, NONE, IPV4, UDP),
+ TPTE(0xC4, ETHER, IPV6, NONE, IP, NONE, IPV4, TCP),
+ TPTE(0xC5, ETHER, IPV6, NONE, IP, NONE, IPV4, SCTP),
+ TPTE(0xC9, ETHER, IPV6, NONE, IP, NONE, IPV6, FRAG),
+ TPTE(0xCA, ETHER, IPV6, NONE, IP, NONE, IPV6, NONFRAG),
+ TPTE(0xCB, ETHER, IPV6, NONE, IP, NONE, IPV6, UDP),
+ TPTE(0xCC, ETHER, IPV6, NONE, IP, NONE, IPV6, TCP),
+ TPTE(0xCD, ETHER, IPV6, NONE, IP, NONE, IPV6, SCTP),
+};
+
+u32 *ngbe_get_supported_ptypes(void)
+{
+ static u32 ptypes[] = {
+ /* For non-vec functions,
+ * refers to ngbe_rxd_pkt_info_to_pkt_type();
+ */
+ RTE_PTYPE_L2_ETHER,
+ RTE_PTYPE_L3_IPV4,
+ RTE_PTYPE_L3_IPV4_EXT,
+ RTE_PTYPE_L3_IPV6,
+ RTE_PTYPE_L3_IPV6_EXT,
+ RTE_PTYPE_L4_SCTP,
+ RTE_PTYPE_L4_TCP,
+ RTE_PTYPE_L4_UDP,
+ RTE_PTYPE_TUNNEL_IP,
+ RTE_PTYPE_INNER_L3_IPV6,
+ RTE_PTYPE_INNER_L3_IPV6_EXT,
+ RTE_PTYPE_INNER_L4_TCP,
+ RTE_PTYPE_INNER_L4_UDP,
+ RTE_PTYPE_UNKNOWN
+ };
+
+ return ptypes;
+}
+
+static inline u8
+ngbe_encode_ptype_mac(u32 ptype)
+{
+ u8 ptid;
+
+ ptid = NGBE_PTID_PKT_MAC;
+
+ switch (ptype & RTE_PTYPE_L2_MASK) {
+ case RTE_PTYPE_UNKNOWN:
+ break;
+ case RTE_PTYPE_L2_ETHER_TIMESYNC:
+ ptid |= NGBE_PTID_TYP_TS;
+ break;
+ case RTE_PTYPE_L2_ETHER_ARP:
+ ptid |= NGBE_PTID_TYP_ARP;
+ break;
+ case RTE_PTYPE_L2_ETHER_LLDP:
+ ptid |= NGBE_PTID_TYP_LLDP;
+ break;
+ default:
+ ptid |= NGBE_PTID_TYP_MAC;
+ break;
+ }
+
+ return ptid;
+}
+
+static inline u8
+ngbe_encode_ptype_ip(u32 ptype)
+{
+ u8 ptid;
+
+ ptid = NGBE_PTID_PKT_IP;
+
+ switch (ptype & RTE_PTYPE_L3_MASK) {
+ case RTE_PTYPE_L3_IPV4:
+ case RTE_PTYPE_L3_IPV4_EXT:
+ case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
+ break;
+ case RTE_PTYPE_L3_IPV6:
+ case RTE_PTYPE_L3_IPV6_EXT:
+ case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN:
+ ptid |= NGBE_PTID_PKT_IPV6;
+ break;
+ default:
+ return ngbe_encode_ptype_mac(ptype);
+ }
+
+ switch (ptype & RTE_PTYPE_L4_MASK) {
+ case RTE_PTYPE_L4_TCP:
+ ptid |= NGBE_PTID_TYP_TCP;
+ break;
+ case RTE_PTYPE_L4_UDP:
+ ptid |= NGBE_PTID_TYP_UDP;
+ break;
+ case RTE_PTYPE_L4_SCTP:
+ ptid |= NGBE_PTID_TYP_SCTP;
+ break;
+ case RTE_PTYPE_L4_FRAG:
+ ptid |= NGBE_PTID_TYP_IPFRAG;
+ break;
+ default:
+ ptid |= NGBE_PTID_TYP_IPDATA;
+ break;
+ }
+
+ return ptid;
+}
+
+static inline u8
+ngbe_encode_ptype_tunnel(u32 ptype)
+{
+ u8 ptid;
+
+ ptid = NGBE_PTID_PKT_TUN;
+
+ switch (ptype & RTE_PTYPE_L3_MASK) {
+ case RTE_PTYPE_L3_IPV4:
+ case RTE_PTYPE_L3_IPV4_EXT:
+ case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
+ break;
+ case RTE_PTYPE_L3_IPV6:
+ case RTE_PTYPE_L3_IPV6_EXT:
+ case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN:
+ ptid |= NGBE_PTID_TUN_IPV6;
+ break;
+ default:
+ return ngbe_encode_ptype_ip(ptype);
+ }
+
+ /* VXLAN/GRE/Teredo/VXLAN-GPE are not supported in EM */
+ switch (ptype & RTE_PTYPE_TUNNEL_MASK) {
+ case RTE_PTYPE_TUNNEL_IP:
+ ptid |= NGBE_PTID_TUN_EI;
+ break;
+ case RTE_PTYPE_TUNNEL_GRE:
+ case RTE_PTYPE_TUNNEL_VXLAN_GPE:
+ ptid |= NGBE_PTID_TUN_EIG;
+ break;
+ case RTE_PTYPE_TUNNEL_VXLAN:
+ case RTE_PTYPE_TUNNEL_NVGRE:
+ case RTE_PTYPE_TUNNEL_GENEVE:
+ case RTE_PTYPE_TUNNEL_GRENAT:
+ break;
+ default:
+ return ptid;
+ }
+
+ switch (ptype & RTE_PTYPE_INNER_L2_MASK) {
+ case RTE_PTYPE_INNER_L2_ETHER:
+ ptid |= NGBE_PTID_TUN_EIGM;
+ break;
+ case RTE_PTYPE_INNER_L2_ETHER_VLAN:
+ ptid |= NGBE_PTID_TUN_EIGMV;
+ break;
+ case RTE_PTYPE_INNER_L2_ETHER_QINQ:
+ ptid |= NGBE_PTID_TUN_EIGMV;
+ break;
+ default:
+ break;
+ }
+
+ switch (ptype & RTE_PTYPE_INNER_L3_MASK) {
+ case RTE_PTYPE_INNER_L3_IPV4:
+ case RTE_PTYPE_INNER_L3_IPV4_EXT:
+ case RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN:
+ break;
+ case RTE_PTYPE_INNER_L3_IPV6:
+ case RTE_PTYPE_INNER_L3_IPV6_EXT:
+ case RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN:
+ ptid |= NGBE_PTID_PKT_IPV6;
+ break;
+ default:
+ return ptid;
+ }
+
+ switch (ptype & RTE_PTYPE_INNER_L4_MASK) {
+ case RTE_PTYPE_INNER_L4_TCP:
+ ptid |= NGBE_PTID_TYP_TCP;
+ break;
+ case RTE_PTYPE_INNER_L4_UDP:
+ ptid |= NGBE_PTID_TYP_UDP;
+ break;
+ case RTE_PTYPE_INNER_L4_SCTP:
+ ptid |= NGBE_PTID_TYP_SCTP;
+ break;
+ case RTE_PTYPE_INNER_L4_FRAG:
+ ptid |= NGBE_PTID_TYP_IPFRAG;
+ break;
+ default:
+ ptid |= NGBE_PTID_TYP_IPDATA;
+ break;
+ }
+
+ return ptid;
+}
+
+u32 ngbe_decode_ptype(u8 ptid)
+{
+ if (-1 != ngbe_etflt_id(ptid))
+ return RTE_PTYPE_UNKNOWN;
+
+ return ngbe_ptype_lookup[ptid];
+}
+
+u8 ngbe_encode_ptype(u32 ptype)
+{
+ u8 ptid = 0;
+
+ if (ptype & RTE_PTYPE_TUNNEL_MASK)
+ ptid = ngbe_encode_ptype_tunnel(ptype);
+ else if (ptype & RTE_PTYPE_L3_MASK)
+ ptid = ngbe_encode_ptype_ip(ptype);
+ else if (ptype & RTE_PTYPE_L2_MASK)
+ ptid = ngbe_encode_ptype_mac(ptype);
+ else
+ ptid = NGBE_PTID_NULL;
+
+ return ptid;
+}
+
diff --git a/drivers/net/ngbe/ngbe_ptypes.h b/drivers/net/ngbe/ngbe_ptypes.h
new file mode 100644
index 0000000000..2ac33d814b
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_ptypes.h
@@ -0,0 +1,240 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd.
+ */
+
+#ifndef _NGBE_PTYPE_H_
+#define _NGBE_PTYPE_H_
+
+/**
+ * PTID(Packet Type Identifier, 8bits)
+ * - Bit 3:0 detailed types.
+ * - Bit 5:4 basic types.
+ * - Bit 7:6 tunnel types.
+ **/
+#define NGBE_PTID_NULL 0
+#define NGBE_PTID_MAX 256
+#define NGBE_PTID_MASK 0xFF
+#define NGBE_PTID_MASK_TUNNEL 0x7F
+
+/* TUN */
+#define NGBE_PTID_TUN_IPV6 0x40
+#define NGBE_PTID_TUN_EI 0x00 /* IP */
+#define NGBE_PTID_TUN_EIG 0x10 /* IP+GRE */
+#define NGBE_PTID_TUN_EIGM 0x20 /* IP+GRE+MAC */
+#define NGBE_PTID_TUN_EIGMV 0x30 /* IP+GRE+MAC+VLAN */
+
+/* PKT for !TUN */
+#define NGBE_PTID_PKT_TUN (0x80)
+#define NGBE_PTID_PKT_MAC (0x10)
+#define NGBE_PTID_PKT_IP (0x20)
+
+/* TYP for PKT=mac */
+#define NGBE_PTID_TYP_MAC (0x01)
+#define NGBE_PTID_TYP_TS (0x02) /* time sync */
+#define NGBE_PTID_TYP_FIP (0x03)
+#define NGBE_PTID_TYP_LLDP (0x04)
+#define NGBE_PTID_TYP_CNM (0x05)
+#define NGBE_PTID_TYP_EAPOL (0x06)
+#define NGBE_PTID_TYP_ARP (0x07)
+#define NGBE_PTID_TYP_ETF (0x08)
+
+/* TYP for PKT=ip */
+#define NGBE_PTID_PKT_IPV6 (0x08)
+#define NGBE_PTID_TYP_IPFRAG (0x01)
+#define NGBE_PTID_TYP_IPDATA (0x02)
+#define NGBE_PTID_TYP_UDP (0x03)
+#define NGBE_PTID_TYP_TCP (0x04)
+#define NGBE_PTID_TYP_SCTP (0x05)
+
+/* packet type non-ip values */
+enum ngbe_l2_ptids {
+ NGBE_PTID_L2_ABORTED = (NGBE_PTID_PKT_MAC),
+ NGBE_PTID_L2_MAC = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_MAC),
+ NGBE_PTID_L2_TMST = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_TS),
+ NGBE_PTID_L2_FIP = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_FIP),
+ NGBE_PTID_L2_LLDP = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_LLDP),
+ NGBE_PTID_L2_CNM = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_CNM),
+ NGBE_PTID_L2_EAPOL = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_EAPOL),
+ NGBE_PTID_L2_ARP = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_ARP),
+
+ NGBE_PTID_L2_IPV4_FRAG = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_IPFRAG),
+ NGBE_PTID_L2_IPV4 = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_IPDATA),
+ NGBE_PTID_L2_IPV4_UDP = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_UDP),
+ NGBE_PTID_L2_IPV4_TCP = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_TCP),
+ NGBE_PTID_L2_IPV4_SCTP = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_SCTP),
+ NGBE_PTID_L2_IPV6_FRAG = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 |
+ NGBE_PTID_TYP_IPFRAG),
+ NGBE_PTID_L2_IPV6 = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 |
+ NGBE_PTID_TYP_IPDATA),
+ NGBE_PTID_L2_IPV6_UDP = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 |
+ NGBE_PTID_TYP_UDP),
+ NGBE_PTID_L2_IPV6_TCP = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 |
+ NGBE_PTID_TYP_TCP),
+ NGBE_PTID_L2_IPV6_SCTP = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 |
+ NGBE_PTID_TYP_SCTP),
+
+ NGBE_PTID_L2_TUN4_MAC = (NGBE_PTID_PKT_TUN |
+ NGBE_PTID_TUN_EIGM),
+ NGBE_PTID_L2_TUN6_MAC = (NGBE_PTID_PKT_TUN |
+ NGBE_PTID_TUN_IPV6 | NGBE_PTID_TUN_EIGM),
+};
+
+
+/*
+ * PTYPE(Packet Type, 32bits)
+ * - Bit 3:0 is for L2 types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
+ * - Bit 15:12 is for tunnel types.
+ * - Bit 19:16 is for inner L2 types.
+ * - Bit 23:20 is for inner L3 types.
+ * - Bit 27:24 is for inner L4 types.
+ * - Bit 31:28 is reserved.
+ * please ref to rte_mbuf.h: rte_mbuf.packet_type
+ */
+struct rte_ngbe_ptype {
+ u32 l2:4; /* outer mac */
+ u32 l3:4; /* outer internet protocol */
+ u32 l4:4; /* outer transport protocol */
+ u32 tun:4; /* tunnel protocol */
+
+ u32 el2:4; /* inner mac */
+ u32 el3:4; /* inner internet protocol */
+ u32 el4:4; /* inner transport protocol */
+ u32 rsv:3;
+ u32 known:1;
+};
+
+#ifndef RTE_PTYPE_UNKNOWN
+#define RTE_PTYPE_UNKNOWN 0x00000000
+#define RTE_PTYPE_L2_ETHER 0x00000001
+#define RTE_PTYPE_L2_ETHER_TIMESYNC 0x00000002
+#define RTE_PTYPE_L2_ETHER_ARP 0x00000003
+#define RTE_PTYPE_L2_ETHER_LLDP 0x00000004
+#define RTE_PTYPE_L2_ETHER_NSH 0x00000005
+#define RTE_PTYPE_L2_ETHER_FCOE 0x00000009
+#define RTE_PTYPE_L3_IPV4 0x00000010
+#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
+#define RTE_PTYPE_L3_IPV6 0x00000040
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
+#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
+#define RTE_PTYPE_L4_TCP 0x00000100
+#define RTE_PTYPE_L4_UDP 0x00000200
+#define RTE_PTYPE_L4_FRAG 0x00000300
+#define RTE_PTYPE_L4_SCTP 0x00000400
+#define RTE_PTYPE_L4_ICMP 0x00000500
+#define RTE_PTYPE_L4_NONFRAG 0x00000600
+#define RTE_PTYPE_TUNNEL_IP 0x00001000
+#define RTE_PTYPE_TUNNEL_GRE 0x00002000
+#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
+#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
+#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
+#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
+#define RTE_PTYPE_INNER_L2_ETHER 0x00010000
+#define RTE_PTYPE_INNER_L2_ETHER_VLAN 0x00020000
+#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
+#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
+#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
+#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
+#define RTE_PTYPE_INNER_L4_TCP 0x01000000
+#define RTE_PTYPE_INNER_L4_UDP 0x02000000
+#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
+#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
+#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
+#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
+#endif /* !RTE_PTYPE_UNKNOWN */
+#define RTE_PTYPE_L3_IPV4u RTE_PTYPE_L3_IPV4_EXT_UNKNOWN
+#define RTE_PTYPE_L3_IPV6u RTE_PTYPE_L3_IPV6_EXT_UNKNOWN
+#define RTE_PTYPE_INNER_L3_IPV4u RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN
+#define RTE_PTYPE_INNER_L3_IPV6u RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN
+#define RTE_PTYPE_L2_ETHER_FIP RTE_PTYPE_L2_ETHER
+#define RTE_PTYPE_L2_ETHER_CNM RTE_PTYPE_L2_ETHER
+#define RTE_PTYPE_L2_ETHER_EAPOL RTE_PTYPE_L2_ETHER
+#define RTE_PTYPE_L2_ETHER_FILTER RTE_PTYPE_L2_ETHER
+
+u32 *ngbe_get_supported_ptypes(void);
+u32 ngbe_decode_ptype(u8 ptid);
+u8 ngbe_encode_ptype(u32 ptype);
+
+/**
+ * PT(Packet Type, 32bits)
+ * - Bit 3:0 is for L2 types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
+ * - Bit 15:12 is for tunnel types.
+ * - Bit 19:16 is for inner L2 types.
+ * - Bit 23:20 is for inner L3 types.
+ * - Bit 27:24 is for inner L4 types.
+ * - Bit 31:28 is reserved.
+ * PT is a more accurate version of PTYPE
+ **/
+#define NGBE_PT_ETHER 0x00
+#define NGBE_PT_IPV4 0x01
+#define NGBE_PT_IPV4_TCP 0x11
+#define NGBE_PT_IPV4_UDP 0x21
+#define NGBE_PT_IPV4_SCTP 0x41
+#define NGBE_PT_IPV4_EXT 0x03
+#define NGBE_PT_IPV4_EXT_TCP 0x13
+#define NGBE_PT_IPV4_EXT_UDP 0x23
+#define NGBE_PT_IPV4_EXT_SCTP 0x43
+#define NGBE_PT_IPV6 0x04
+#define NGBE_PT_IPV6_TCP 0x14
+#define NGBE_PT_IPV6_UDP 0x24
+#define NGBE_PT_IPV6_SCTP 0x44
+#define NGBE_PT_IPV6_EXT 0x0C
+#define NGBE_PT_IPV6_EXT_TCP 0x1C
+#define NGBE_PT_IPV6_EXT_UDP 0x2C
+#define NGBE_PT_IPV6_EXT_SCTP 0x4C
+#define NGBE_PT_IPV4_IPV6 0x05
+#define NGBE_PT_IPV4_IPV6_TCP 0x15
+#define NGBE_PT_IPV4_IPV6_UDP 0x25
+#define NGBE_PT_IPV4_IPV6_SCTP 0x45
+#define NGBE_PT_IPV4_EXT_IPV6 0x07
+#define NGBE_PT_IPV4_EXT_IPV6_TCP 0x17
+#define NGBE_PT_IPV4_EXT_IPV6_UDP 0x27
+#define NGBE_PT_IPV4_EXT_IPV6_SCTP 0x47
+#define NGBE_PT_IPV4_IPV6_EXT 0x0D
+#define NGBE_PT_IPV4_IPV6_EXT_TCP 0x1D
+#define NGBE_PT_IPV4_IPV6_EXT_UDP 0x2D
+#define NGBE_PT_IPV4_IPV6_EXT_SCTP 0x4D
+#define NGBE_PT_IPV4_EXT_IPV6_EXT 0x0F
+#define NGBE_PT_IPV4_EXT_IPV6_EXT_TCP 0x1F
+#define NGBE_PT_IPV4_EXT_IPV6_EXT_UDP 0x2F
+#define NGBE_PT_IPV4_EXT_IPV6_EXT_SCTP 0x4F
+
+#define NGBE_PT_MAX 256
+
+/* ether type filter list: one static filter per filter consumer. This is
+ * to avoid filter collisions later. Add new filters
+ * here!!
+ * EAPOL 802.1x (0x888e): Filter 0
+ * FCoE (0x8906): Filter 2
+ * 1588 (0x88f7): Filter 3
+ * FIP (0x8914): Filter 4
+ * LLDP (0x88CC): Filter 5
+ * LACP (0x8809): Filter 6
+ * FC (0x8808): Filter 7
+ */
+#define NGBE_ETF_ID_EAPOL 0
+#define NGBE_ETF_ID_FCOE 2
+#define NGBE_ETF_ID_1588 3
+#define NGBE_ETF_ID_FIP 4
+#define NGBE_ETF_ID_LLDP 5
+#define NGBE_ETF_ID_LACP 6
+#define NGBE_ETF_ID_FC 7
+#define NGBE_ETF_ID_MAX 8
+
+#define NGBE_PTID_ETF_MIN 0x18
+#define NGBE_PTID_ETF_MAX 0x1F
+static inline int ngbe_etflt_id(u8 ptid)
+{
+ if (ptid >= NGBE_PTID_ETF_MIN && ptid <= NGBE_PTID_ETF_MAX)
+ return ptid - NGBE_PTID_ETF_MIN;
+ else
+ return -1;
+}
+
+#endif /* _NGBE_PTYPE_H_ */
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 5c06e0d550..a3ef0f7577 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -253,6 +253,16 @@ ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
* Rx functions
*
**********************************************************************/
+static inline uint32_t
+ngbe_rxd_pkt_info_to_pkt_type(uint32_t pkt_info, uint16_t ptid_mask)
+{
+ uint16_t ptid = NGBE_RXD_PTID(pkt_info);
+
+ ptid &= ptid_mask;
+
+ return ngbe_decode_ptype(ptid);
+}
+
uint16_t
ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts)
@@ -267,6 +277,7 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
struct ngbe_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
+ uint32_t pkt_info;
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -378,6 +389,10 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;
+ pkt_info = rte_le_to_cpu_32(rxd.qw0.dw0);
+ rxm->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info,
+ rxq->pkt_type_mask);
+
/*
* Store the mbuf address into the next entry of the array
* of returned packets.
@@ -799,6 +814,7 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->port_id = dev->data->port_id;
rxq->drop_en = rx_conf->rx_drop_en;
rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+ rxq->pkt_type_mask = NGBE_PTID_MASK;
/*
* Allocate Rx ring hardware descriptors. A memzone large enough to
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index a89d59e06b..788d684def 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -238,6 +238,8 @@ struct ngbe_rx_queue {
uint16_t rx_free_thresh; /**< max free Rx desc to hold */
uint16_t queue_id; /**< RX queue index */
uint16_t reg_idx; /**< RX queue register index */
+ /** Packet type mask for different NICs */
+ uint16_t pkt_type_mask;
uint16_t port_id; /**< Device port identifier */
uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En */
uint8_t rx_deferred_start; /**< not in global dev start */
--
2.21.0.windows.1
* [dpdk-dev] [PATCH 02/32] net/ngbe: support scattered Rx
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 01/32] net/ngbe: add packet type Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-15 13:22 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 03/32] net/ngbe: support Rx checksum offload Jiawen Wu
` (29 subsequent siblings)
31 siblings, 1 reply; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add scattered Rx functions to support receiving packets segmented across multiple mbufs.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 1 +
doc/guides/nics/ngbe.rst | 1 +
drivers/net/ngbe/ngbe_ethdev.c | 20 +-
drivers/net/ngbe/ngbe_ethdev.h | 8 +
drivers/net/ngbe/ngbe_rxtx.c | 541 ++++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_rxtx.h | 5 +
6 files changed, 574 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 8b7588184a..f85754eb7a 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -8,6 +8,7 @@ Speed capabilities = Y
Link status = Y
Link status event = Y
Queue start/stop = Y
+Scattered Rx = Y
Packet type parsing = Y
Multiprocess aware = Y
Linux = Y
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index d044397cd5..463452ce8c 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -13,6 +13,7 @@ Features
- Packet type information
- Link state information
+- Scattered for RX
Prerequisites
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 4388d93560..fba0a2dcfd 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -140,8 +140,16 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
eth_dev->rx_pkt_burst = &ngbe_recv_pkts;
eth_dev->tx_pkt_burst = &ngbe_xmit_pkts_simple;
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ /*
+ * For secondary processes, we don't initialise any further as primary
+ * has already done this work. Only check we don't need a different
+ * Rx and Tx function.
+ */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ ngbe_set_rx_function(eth_dev);
+
return 0;
+ }
rte_eth_copy_pci_info(eth_dev, pci_dev);
@@ -528,6 +536,9 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
ngbe_dev_clear_queues(dev);
+ /* Clear stored conf */
+ dev->data->scattered_rx = 0;
+
/* Clear recorded link status */
memset(&link, 0, sizeof(link));
rte_eth_linkstatus_set(dev, &link);
@@ -628,6 +639,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues;
dev_info->min_rx_bufsize = 1024;
dev_info->max_rx_pktlen = 15872;
+ dev_info->rx_offload_capa = (ngbe_get_rx_port_offloads(dev) |
+ dev_info->rx_queue_offload_capa);
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -670,7 +683,10 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
const uint32_t *
ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
{
- if (dev->rx_pkt_burst == ngbe_recv_pkts)
+ if (dev->rx_pkt_burst == ngbe_recv_pkts ||
+ dev->rx_pkt_burst == ngbe_recv_pkts_sc_single_alloc ||
+ dev->rx_pkt_burst == ngbe_recv_pkts_sc_bulk_alloc ||
+ dev->rx_pkt_burst == ngbe_recv_pkts_bulk_alloc)
return ngbe_get_supported_ptypes();
return NULL;
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 486c6c3839..e7fe9a03b7 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -106,6 +106,14 @@ int ngbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
+uint16_t ngbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+
+uint16_t ngbe_recv_pkts_sc_single_alloc(void *rx_queue,
+ struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+uint16_t ngbe_recv_pkts_sc_bulk_alloc(void *rx_queue,
+ struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index a3ef0f7577..49fa978853 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -263,6 +263,243 @@ ngbe_rxd_pkt_info_to_pkt_type(uint32_t pkt_info, uint16_t ptid_mask)
return ngbe_decode_ptype(ptid);
}
+/*
+ * LOOK_AHEAD defines how many desc statuses to check beyond the
+ * current descriptor.
+ * It must be a pound define for optimal performance.
+ * Do not change the value of LOOK_AHEAD, as the ngbe_rx_scan_hw_ring
+ * function only works with LOOK_AHEAD=8.
+ */
+#define LOOK_AHEAD 8
+#if (LOOK_AHEAD != 8)
+#error "PMD NGBE: LOOK_AHEAD must be 8\n"
+#endif
+static inline int
+ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq)
+{
+ volatile struct ngbe_rx_desc *rxdp;
+ struct ngbe_rx_entry *rxep;
+ struct rte_mbuf *mb;
+ uint16_t pkt_len;
+ int nb_dd;
+ uint32_t s[LOOK_AHEAD];
+ uint32_t pkt_info[LOOK_AHEAD];
+ int i, j, nb_rx = 0;
+ uint32_t status;
+
+ /* get references to current descriptor and S/W ring entry */
+ rxdp = &rxq->rx_ring[rxq->rx_tail];
+ rxep = &rxq->sw_ring[rxq->rx_tail];
+
+ status = rxdp->qw1.lo.status;
+ /* check to make sure there is at least 1 packet to receive */
+ if (!(status & rte_cpu_to_le_32(NGBE_RXD_STAT_DD)))
+ return 0;
+
+ /*
+ * Scan LOOK_AHEAD descriptors at a time to determine which descriptors
+ * reference packets that are ready to be received.
+ */
+ for (i = 0; i < RTE_PMD_NGBE_RX_MAX_BURST;
+ i += LOOK_AHEAD, rxdp += LOOK_AHEAD, rxep += LOOK_AHEAD) {
+ /* Read desc statuses backwards to avoid race condition */
+ for (j = 0; j < LOOK_AHEAD; j++)
+ s[j] = rte_le_to_cpu_32(rxdp[j].qw1.lo.status);
+
+ rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+ /* Compute how many status bits were set */
+ for (nb_dd = 0; nb_dd < LOOK_AHEAD &&
+ (s[nb_dd] & NGBE_RXD_STAT_DD); nb_dd++)
+ ;
+
+ for (j = 0; j < nb_dd; j++)
+ pkt_info[j] = rte_le_to_cpu_32(rxdp[j].qw0.dw0);
+
+ nb_rx += nb_dd;
+
+ /* Translate descriptor info to mbuf format */
+ for (j = 0; j < nb_dd; ++j) {
+ mb = rxep[j].mbuf;
+ pkt_len = rte_le_to_cpu_16(rxdp[j].qw1.hi.len);
+ mb->data_len = pkt_len;
+ mb->pkt_len = pkt_len;
+
+ mb->packet_type =
+ ngbe_rxd_pkt_info_to_pkt_type(pkt_info[j],
+ rxq->pkt_type_mask);
+ }
+
+ /* Move mbuf pointers from the S/W ring to the stage */
+ for (j = 0; j < LOOK_AHEAD; ++j)
+ rxq->rx_stage[i + j] = rxep[j].mbuf;
+
+ /* stop if all requested packets could not be received */
+ if (nb_dd != LOOK_AHEAD)
+ break;
+ }
+
+ /* clear software ring entries so we can cleanup correctly */
+ for (i = 0; i < nb_rx; ++i)
+ rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL;
+
+ return nb_rx;
+}
+
+static inline int
+ngbe_rx_alloc_bufs(struct ngbe_rx_queue *rxq, bool reset_mbuf)
+{
+ volatile struct ngbe_rx_desc *rxdp;
+ struct ngbe_rx_entry *rxep;
+ struct rte_mbuf *mb;
+ uint16_t alloc_idx;
+ __le64 dma_addr;
+ int diag, i;
+
+ /* allocate buffers in bulk directly into the S/W ring */
+ alloc_idx = rxq->rx_free_trigger - (rxq->rx_free_thresh - 1);
+ rxep = &rxq->sw_ring[alloc_idx];
+ diag = rte_mempool_get_bulk(rxq->mb_pool, (void *)rxep,
+ rxq->rx_free_thresh);
+ if (unlikely(diag != 0))
+ return -ENOMEM;
+
+ rxdp = &rxq->rx_ring[alloc_idx];
+ for (i = 0; i < rxq->rx_free_thresh; ++i) {
+ /* populate the static rte mbuf fields */
+ mb = rxep[i].mbuf;
+ if (reset_mbuf)
+ mb->port = rxq->port_id;
+
+ rte_mbuf_refcnt_set(mb, 1);
+ mb->data_off = RTE_PKTMBUF_HEADROOM;
+
+ /* populate the descriptors */
+ dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+ NGBE_RXD_HDRADDR(&rxdp[i], 0);
+ NGBE_RXD_PKTADDR(&rxdp[i], dma_addr);
+ }
+
+ /* update state of internal queue structure */
+ rxq->rx_free_trigger = rxq->rx_free_trigger + rxq->rx_free_thresh;
+ if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+ rxq->rx_free_trigger = rxq->rx_free_thresh - 1;
+
+ /* no errors */
+ return 0;
+}
+
+static inline uint16_t
+ngbe_rx_fill_from_stage(struct ngbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+ int i;
+
+ /* how many packets are ready to return? */
+ nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+ /* copy mbuf pointers to the application's packet list */
+ for (i = 0; i < nb_pkts; ++i)
+ rx_pkts[i] = stage[i];
+
+ /* update internal queue state */
+ rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+ rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+ return nb_pkts;
+}
+
+static inline uint16_t
+ngbe_rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct ngbe_rx_queue *rxq = (struct ngbe_rx_queue *)rx_queue;
+ struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
+ uint16_t nb_rx = 0;
+
+ /* Any previously recv'd pkts will be returned from the Rx stage */
+ if (rxq->rx_nb_avail)
+ return ngbe_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+ /* Scan the H/W ring for packets to receive */
+ nb_rx = (uint16_t)ngbe_rx_scan_hw_ring(rxq);
+
+ /* update internal queue state */
+ rxq->rx_next_avail = 0;
+ rxq->rx_nb_avail = nb_rx;
+ rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+ /* if required, allocate new buffers to replenish descriptors */
+ if (rxq->rx_tail > rxq->rx_free_trigger) {
+ uint16_t cur_free_trigger = rxq->rx_free_trigger;
+
+ if (ngbe_rx_alloc_bufs(rxq, true) != 0) {
+ int i, j;
+
+ PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+ "queue_id=%u", (uint16_t)rxq->port_id,
+ (uint16_t)rxq->queue_id);
+
+ dev->data->rx_mbuf_alloc_failed +=
+ rxq->rx_free_thresh;
+
+ /*
+ * Need to rewind any previous receives if we cannot
+ * allocate new buffers to replenish the old ones.
+ */
+ rxq->rx_nb_avail = 0;
+ rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+ for (i = 0, j = rxq->rx_tail; i < nb_rx; ++i, ++j)
+ rxq->sw_ring[j].mbuf = rxq->rx_stage[i];
+
+ return 0;
+ }
+
+ /* update tail pointer */
+ rte_wmb();
+ ngbe_set32_relaxed(rxq->rdt_reg_addr, cur_free_trigger);
+ }
+
+ if (rxq->rx_tail >= rxq->nb_rx_desc)
+ rxq->rx_tail = 0;
+
+ /* received any packets this loop? */
+ if (rxq->rx_nb_avail)
+ return ngbe_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+ return 0;
+}
+
+/* split requests into chunks of size RTE_PMD_NGBE_RX_MAX_BURST */
+uint16_t
+ngbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ uint16_t nb_rx;
+
+ if (unlikely(nb_pkts == 0))
+ return 0;
+
+ if (likely(nb_pkts <= RTE_PMD_NGBE_RX_MAX_BURST))
+ return ngbe_rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+ /* request is relatively large, chunk it up */
+ nb_rx = 0;
+ while (nb_pkts) {
+ uint16_t ret, n;
+
+ n = (uint16_t)RTE_MIN(nb_pkts, RTE_PMD_NGBE_RX_MAX_BURST);
+ ret = ngbe_rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+ nb_rx = (uint16_t)(nb_rx + ret);
+ nb_pkts = (uint16_t)(nb_pkts - ret);
+ if (ret < n)
+ break;
+ }
+
+ return nb_rx;
+}
+
uint16_t
ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts)
@@ -426,6 +663,246 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_rx;
}
+static inline void
+ngbe_fill_cluster_head_buf(struct rte_mbuf *head, struct ngbe_rx_desc *desc,
+ struct ngbe_rx_queue *rxq, uint32_t staterr)
+{
+ uint32_t pkt_info;
+
+ RTE_SET_USED(staterr);
+ head->port = rxq->port_id;
+
+ pkt_info = rte_le_to_cpu_32(desc->qw0.dw0);
+ head->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info,
+ rxq->pkt_type_mask);
+}
+
+/**
+ * ngbe_recv_pkts_sc - receive handler for scatter case.
+ *
+ * @rx_queue Rx queue handle
+ * @rx_pkts table of received packets
+ * @nb_pkts size of rx_pkts table
+ * @bulk_alloc if TRUE bulk allocation is used for a HW ring refilling
+ *
+ * Returns the number of received packets/clusters (according to the "bulk
+ * receive" interface).
+ */
+static inline uint16_t
+ngbe_recv_pkts_sc(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
+ bool bulk_alloc)
+{
+ struct ngbe_rx_queue *rxq = rx_queue;
+ struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
+ volatile struct ngbe_rx_desc *rx_ring = rxq->rx_ring;
+ struct ngbe_rx_entry *sw_ring = rxq->sw_ring;
+ struct ngbe_scattered_rx_entry *sw_sc_ring = rxq->sw_sc_ring;
+ uint16_t rx_id = rxq->rx_tail;
+ uint16_t nb_rx = 0;
+ uint16_t nb_hold = rxq->nb_rx_hold;
+ uint16_t prev_id = rxq->rx_tail;
+
+ while (nb_rx < nb_pkts) {
+ bool eop;
+ struct ngbe_rx_entry *rxe;
+ struct ngbe_scattered_rx_entry *sc_entry;
+ struct ngbe_scattered_rx_entry *next_sc_entry = NULL;
+ struct ngbe_rx_entry *next_rxe = NULL;
+ struct rte_mbuf *first_seg;
+ struct rte_mbuf *rxm;
+ struct rte_mbuf *nmb = NULL;
+ struct ngbe_rx_desc rxd;
+ uint16_t data_len;
+ uint16_t next_id;
+ volatile struct ngbe_rx_desc *rxdp;
+ uint32_t staterr;
+
+next_desc:
+ rxdp = &rx_ring[rx_id];
+ staterr = rte_le_to_cpu_32(rxdp->qw1.lo.status);
+
+ if (!(staterr & NGBE_RXD_STAT_DD))
+ break;
+
+ rxd = *rxdp;
+
+ PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
+ "staterr=0x%x data_len=%u",
+ rxq->port_id, rxq->queue_id, rx_id, staterr,
+ rte_le_to_cpu_16(rxd.qw1.hi.len));
+
+ if (!bulk_alloc) {
+ nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
+ if (nmb == NULL) {
+ PMD_RX_LOG(DEBUG, "Rx mbuf alloc failed "
+ "port_id=%u queue_id=%u",
+ rxq->port_id, rxq->queue_id);
+
+ dev->data->rx_mbuf_alloc_failed++;
+ break;
+ }
+ } else if (nb_hold > rxq->rx_free_thresh) {
+ uint16_t next_rdt = rxq->rx_free_trigger;
+
+ if (!ngbe_rx_alloc_bufs(rxq, false)) {
+ rte_wmb();
+ ngbe_set32_relaxed(rxq->rdt_reg_addr,
+ next_rdt);
+ nb_hold -= rxq->rx_free_thresh;
+ } else {
+ PMD_RX_LOG(DEBUG, "Rx bulk alloc failed "
+ "port_id=%u queue_id=%u",
+ rxq->port_id, rxq->queue_id);
+
+ dev->data->rx_mbuf_alloc_failed++;
+ break;
+ }
+ }
+
+ nb_hold++;
+ rxe = &sw_ring[rx_id];
+ eop = staterr & NGBE_RXD_STAT_EOP;
+
+ next_id = rx_id + 1;
+ if (next_id == rxq->nb_rx_desc)
+ next_id = 0;
+
+ /* Prefetch next mbuf while processing current one. */
+ rte_ngbe_prefetch(sw_ring[next_id].mbuf);
+
+ /*
+ * When next Rx descriptor is on a cache-line boundary,
+ * prefetch the next 4 RX descriptors and the next 4 pointers
+ * to mbufs.
+ */
+ if ((next_id & 0x3) == 0) {
+ rte_ngbe_prefetch(&rx_ring[next_id]);
+ rte_ngbe_prefetch(&sw_ring[next_id]);
+ }
+
+ rxm = rxe->mbuf;
+
+ if (!bulk_alloc) {
+ __le64 dma =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+ /*
+ * Update Rx descriptor with the physical address of the
+ * new data buffer of the new allocated mbuf.
+ */
+ rxe->mbuf = nmb;
+
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ NGBE_RXD_HDRADDR(rxdp, 0);
+ NGBE_RXD_PKTADDR(rxdp, dma);
+ } else {
+ rxe->mbuf = NULL;
+ }
+
+ /*
+ * Set data length & data buffer address of mbuf.
+ */
+ data_len = rte_le_to_cpu_16(rxd.qw1.hi.len);
+ rxm->data_len = data_len;
+
+ if (!eop) {
+ uint16_t nextp_id;
+
+ nextp_id = next_id;
+ next_sc_entry = &sw_sc_ring[nextp_id];
+ next_rxe = &sw_ring[nextp_id];
+ rte_ngbe_prefetch(next_rxe);
+ }
+
+ sc_entry = &sw_sc_ring[rx_id];
+ first_seg = sc_entry->fbuf;
+ sc_entry->fbuf = NULL;
+
+ /*
+ * If this is the first buffer of the received packet,
+ * set the pointer to the first mbuf of the packet and
+ * initialize its context.
+ * Otherwise, update the total length and the number of segments
+ * of the current scattered packet, and update the pointer to
+ * the last mbuf of the current packet.
+ */
+ if (first_seg == NULL) {
+ first_seg = rxm;
+ first_seg->pkt_len = data_len;
+ first_seg->nb_segs = 1;
+ } else {
+ first_seg->pkt_len += data_len;
+ first_seg->nb_segs++;
+ }
+
+ prev_id = rx_id;
+ rx_id = next_id;
+
+ /*
+ * If this is not the last buffer of the received packet, update
+ * the pointer to the first mbuf at the NEXTP entry in the
+ * sw_sc_ring and continue to parse the Rx ring.
+ */
+ if (!eop && next_rxe) {
+ rxm->next = next_rxe->mbuf;
+ next_sc_entry->fbuf = first_seg;
+ goto next_desc;
+ }
+
+ /* Initialize the first mbuf of the returned packet */
+ ngbe_fill_cluster_head_buf(first_seg, &rxd, rxq, staterr);
+
+ /* Prefetch data of first segment, if configured to do so. */
+ rte_packet_prefetch((char *)first_seg->buf_addr +
+ first_seg->data_off);
+
+ /*
+ * Store the mbuf address into the next entry of the array
+ * of returned packets.
+ */
+ rx_pkts[nb_rx++] = first_seg;
+ }
+
+ /*
+ * Record index of the next Rx descriptor to probe.
+ */
+ rxq->rx_tail = rx_id;
+
+ /*
+ * If the number of free Rx descriptors is greater than the Rx free
+ * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+ * register.
+ * Update the RDT with the value of the last processed Rx descriptor
+ * minus 1, to guarantee that the RDT register is never equal to the
+ * RDH register, which creates a "full" ring situation from the
+ * hardware point of view...
+ */
+ if (!bulk_alloc && nb_hold > rxq->rx_free_thresh) {
+ PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+ "nb_hold=%u nb_rx=%u",
+ rxq->port_id, rxq->queue_id, rx_id, nb_hold, nb_rx);
+
+ rte_wmb();
+ ngbe_set32_relaxed(rxq->rdt_reg_addr, prev_id);
+ nb_hold = 0;
+ }
+
+ rxq->nb_rx_hold = nb_hold;
+ return nb_rx;
+}
+
+uint16_t
+ngbe_recv_pkts_sc_single_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ return ngbe_recv_pkts_sc(rx_queue, rx_pkts, nb_pkts, false);
+}
+
+uint16_t
+ngbe_recv_pkts_sc_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ return ngbe_recv_pkts_sc(rx_queue, rx_pkts, nb_pkts, true);
+}
/*********************************************************************
*
@@ -777,6 +1254,12 @@ ngbe_reset_rx_queue(struct ngbe_adapter *adapter, struct ngbe_rx_queue *rxq)
rxq->pkt_last_seg = NULL;
}
+uint64_t
+ngbe_get_rx_port_offloads(struct rte_eth_dev *dev __rte_unused)
+{
+ return DEV_RX_OFFLOAD_SCATTER;
+}
+
int
ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx,
@@ -790,10 +1273,13 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
struct ngbe_hw *hw;
uint16_t len;
struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
+ uint64_t offloads;
PMD_INIT_FUNC_TRACE();
hw = ngbe_dev_hw(dev);
+ offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
+
/* Free memory prior to re-allocation if needed... */
if (dev->data->rx_queues[queue_idx] != NULL) {
ngbe_rx_queue_release(dev->data->rx_queues[queue_idx]);
@@ -814,6 +1300,7 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->port_id = dev->data->port_id;
rxq->drop_en = rx_conf->rx_drop_en;
rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+ rxq->offloads = offloads;
rxq->pkt_type_mask = NGBE_PTID_MASK;
/*
@@ -978,6 +1465,54 @@ ngbe_alloc_rx_queue_mbufs(struct ngbe_rx_queue *rxq)
return 0;
}
+void
+ngbe_set_rx_function(struct rte_eth_dev *dev)
+{
+ struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
+
+ if (dev->data->scattered_rx) {
+ /*
+ * Set the scattered callback: there are bulk and
+ * single allocation versions.
+ */
+ if (adapter->rx_bulk_alloc_allowed) {
+ PMD_INIT_LOG(DEBUG, "Using a Scattered with bulk "
+ "allocation callback (port=%d).",
+ dev->data->port_id);
+ dev->rx_pkt_burst = ngbe_recv_pkts_sc_bulk_alloc;
+ } else {
+ PMD_INIT_LOG(DEBUG, "Using Regular (non-vector, "
+ "single allocation) "
+ "Scattered Rx callback "
+ "(port=%d).",
+ dev->data->port_id);
+
+ dev->rx_pkt_burst = ngbe_recv_pkts_sc_single_alloc;
+ }
+ /*
+ * Below we set "simple" callbacks according to port/queues parameters.
+ * If parameters allow we are going to choose between the following
+ * callbacks:
+ * - Bulk Allocation
+ * - Single buffer allocation (the simplest one)
+ */
+ } else if (adapter->rx_bulk_alloc_allowed) {
+ PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+ "satisfied. Rx Burst Bulk Alloc function "
+ "will be used on port=%d.",
+ dev->data->port_id);
+
+ dev->rx_pkt_burst = ngbe_recv_pkts_bulk_alloc;
+ } else {
+ PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are not "
+ "satisfied, or Scattered Rx is requested "
+ "(port=%d).",
+ dev->data->port_id);
+
+ dev->rx_pkt_burst = ngbe_recv_pkts;
+ }
+}
+
/*
* Initializes Receive Unit.
*/
@@ -992,6 +1527,7 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
uint32_t srrctl;
uint16_t buf_size;
uint16_t i;
+ struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
PMD_INIT_FUNC_TRACE();
hw = ngbe_dev_hw(dev);
@@ -1048,6 +1584,11 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
wr32(hw, NGBE_RXCFG(rxq->reg_idx), srrctl);
}
+ if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+ dev->data->scattered_rx = 1;
+
+ ngbe_set_rx_function(dev);
+
return 0;
}
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index 788d684def..07b5ac3fbe 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -243,6 +243,7 @@ struct ngbe_rx_queue {
uint16_t port_id; /**< Device port identifier */
uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En */
uint8_t rx_deferred_start; /**< not in global dev start */
+ uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
struct rte_mbuf fake_mbuf;
/** hold packets to return to application */
@@ -308,4 +309,8 @@ struct ngbe_txq_ops {
void (*reset)(struct ngbe_tx_queue *txq);
};
+void ngbe_set_rx_function(struct rte_eth_dev *dev);
+
+uint64_t ngbe_get_rx_port_offloads(struct rte_eth_dev *dev);
+
#endif /* _NGBE_RXTX_H_ */
--
2.21.0.windows.1
* [dpdk-dev] [PATCH 03/32] net/ngbe: support Rx checksum offload
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 01/32] net/ngbe: add packet type Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 02/32] net/ngbe: support scattered Rx Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-15 16:48 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 04/32] net/ngbe: support TSO Jiawen Wu
` (28 subsequent siblings)
31 siblings, 1 reply; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support IP/L4 checksum offload on Rx, and convert the hardware checksum status to mbuf flags.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 2 +
doc/guides/nics/ngbe.rst | 1 +
drivers/net/ngbe/ngbe_rxtx.c | 75 +++++++++++++++++++++++++++++--
3 files changed, 75 insertions(+), 3 deletions(-)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index f85754eb7a..2777ed5a62 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -9,6 +9,8 @@ Link status = Y
Link status event = Y
Queue start/stop = Y
Scattered Rx = Y
+L3 checksum offload = P
+L4 checksum offload = P
Packet type parsing = Y
Multiprocess aware = Y
Linux = Y
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 463452ce8c..0a14252ff2 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -12,6 +12,7 @@ Features
--------
- Packet type information
+- Checksum offload
- Link state information
- Scattered for RX
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 49fa978853..1661ecafa5 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -263,6 +263,31 @@ ngbe_rxd_pkt_info_to_pkt_type(uint32_t pkt_info, uint16_t ptid_mask)
return ngbe_decode_ptype(ptid);
}
+static inline uint64_t
+rx_desc_error_to_pkt_flags(uint32_t rx_status)
+{
+ uint64_t pkt_flags = 0;
+
+ /* checksum offload can't be disabled */
+ if (rx_status & NGBE_RXD_STAT_IPCS) {
+ pkt_flags |= (rx_status & NGBE_RXD_ERR_IPCS
+ ? PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD);
+ }
+
+ if (rx_status & NGBE_RXD_STAT_L4CS) {
+ pkt_flags |= (rx_status & NGBE_RXD_ERR_L4CS
+ ? PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD);
+ }
+
+ if (rx_status & NGBE_RXD_STAT_EIPCS &&
+ rx_status & NGBE_RXD_ERR_EIPCS) {
+ pkt_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+ }
+
+
+ return pkt_flags;
+}
+
/*
* LOOK_AHEAD defines how many desc statuses to check beyond the
* current descriptor.
@@ -281,6 +306,7 @@ ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq)
struct ngbe_rx_entry *rxep;
struct rte_mbuf *mb;
uint16_t pkt_len;
+ uint64_t pkt_flags;
int nb_dd;
uint32_t s[LOOK_AHEAD];
uint32_t pkt_info[LOOK_AHEAD];
@@ -325,6 +351,9 @@ ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq)
mb->data_len = pkt_len;
mb->pkt_len = pkt_len;
+ /* convert descriptor fields to rte mbuf flags */
+ pkt_flags = rx_desc_error_to_pkt_flags(s[j]);
+ mb->ol_flags = pkt_flags;
mb->packet_type =
ngbe_rxd_pkt_info_to_pkt_type(pkt_info[j],
rxq->pkt_type_mask);
@@ -519,6 +548,7 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t rx_id;
uint16_t nb_rx;
uint16_t nb_hold;
+ uint64_t pkt_flags;
nb_rx = 0;
nb_hold = 0;
@@ -611,11 +641,14 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
/*
* Initialize the returned mbuf.
- * setup generic mbuf fields:
+ * 1) setup generic mbuf fields:
* - number of segments,
* - next segment,
* - packet length,
* - Rx port identifier.
+ * 2) integrate hardware offload data, if any:
+ * - IP checksum flag,
+ * - error flags.
*/
pkt_len = (uint16_t)(rte_le_to_cpu_16(rxd.qw1.hi.len));
rxm->data_off = RTE_PKTMBUF_HEADROOM;
@@ -627,6 +660,8 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->port = rxq->port_id;
pkt_info = rte_le_to_cpu_32(rxd.qw0.dw0);
+ pkt_flags = rx_desc_error_to_pkt_flags(staterr);
+ rxm->ol_flags = pkt_flags;
rxm->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info,
rxq->pkt_type_mask);
@@ -663,16 +698,30 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
return nb_rx;
}
+/**
+ * ngbe_fill_cluster_head_buf - fill the first mbuf of the returned packet
+ *
+ * Fill the following info in the HEAD buffer of the Rx cluster:
+ * - RX port identifier
+ * - hardware offload data, if any:
+ * - IP checksum flag
+ * - error flags
+ * @head HEAD of the packet cluster
+ * @desc HW descriptor to get data from
+ * @rxq Pointer to the Rx queue
+ */
static inline void
ngbe_fill_cluster_head_buf(struct rte_mbuf *head, struct ngbe_rx_desc *desc,
struct ngbe_rx_queue *rxq, uint32_t staterr)
{
uint32_t pkt_info;
+ uint64_t pkt_flags;
- RTE_SET_USED(staterr);
head->port = rxq->port_id;
pkt_info = rte_le_to_cpu_32(desc->qw0.dw0);
+ pkt_flags = rx_desc_error_to_pkt_flags(staterr);
+ head->ol_flags = pkt_flags;
head->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info,
rxq->pkt_type_mask);
}
@@ -1257,7 +1306,14 @@ ngbe_reset_rx_queue(struct ngbe_adapter *adapter, struct ngbe_rx_queue *rxq)
uint64_t
ngbe_get_rx_port_offloads(struct rte_eth_dev *dev __rte_unused)
{
- return DEV_RX_OFFLOAD_SCATTER;
+ uint64_t offloads;
+
+ offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM |
+ DEV_RX_OFFLOAD_SCATTER;
+
+ return offloads;
}
int
@@ -1525,6 +1581,7 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
uint32_t fctrl;
uint32_t hlreg0;
uint32_t srrctl;
+ uint32_t rxcsum;
uint16_t buf_size;
uint16_t i;
struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
@@ -1586,6 +1643,18 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
+ /*
+ * Setup the Checksum Register.
+ * Enable IP/L4 checksum computation by hardware if requested to do so.
+ */
+ rxcsum = rd32(hw, NGBE_PSRCTL);
+ rxcsum |= NGBE_PSRCTL_PCSD;
+ if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ rxcsum |= NGBE_PSRCTL_L4CSUM;
+ else
+ rxcsum &= ~NGBE_PSRCTL_L4CSUM;
+
+ wr32(hw, NGBE_PSRCTL, rxcsum);
ngbe_set_rx_function(dev);
--
2.21.0.windows.1
* [dpdk-dev] [PATCH 04/32] net/ngbe: support TSO
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (2 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 03/32] net/ngbe: support Rx checksum offload Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-15 16:57 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 05/32] net/ngbe: support CRC offload Jiawen Wu
` (27 subsequent siblings)
31 siblings, 1 reply; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add a transmit datapath with hardware offload support, including TCP
segmentation offload (TSO).
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 3 +
doc/guides/nics/ngbe.rst | 3 +-
drivers/net/ngbe/ngbe_ethdev.c | 19 +-
drivers/net/ngbe/ngbe_ethdev.h | 6 +
drivers/net/ngbe/ngbe_rxtx.c | 678 ++++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_rxtx.h | 58 +++
6 files changed, 765 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 2777ed5a62..32f74a3084 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -9,8 +9,11 @@ Link status = Y
Link status event = Y
Queue start/stop = Y
Scattered Rx = Y
+TSO = Y
L3 checksum offload = P
L4 checksum offload = P
+Inner L3 checksum = P
+Inner L4 checksum = P
Packet type parsing = Y
Multiprocess aware = Y
Linux = Y
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 0a14252ff2..6a6ae39243 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -13,8 +13,9 @@ Features
- Packet type information
- Checksum offload
+- TSO offload
- Link state information
-- Scattered for RX
+- Scattered and gather for TX and RX
Prerequisites
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index fba0a2dcfd..e7d63f1b14 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -138,7 +138,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
eth_dev->dev_ops = &ngbe_eth_dev_ops;
eth_dev->rx_pkt_burst = &ngbe_recv_pkts;
- eth_dev->tx_pkt_burst = &ngbe_xmit_pkts_simple;
+ eth_dev->tx_pkt_burst = &ngbe_xmit_pkts;
+ eth_dev->tx_pkt_prepare = &ngbe_prep_pkts;
/*
* For secondary processes, we don't initialise any further as primary
@@ -146,6 +147,20 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
* Rx and Tx function.
*/
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ struct ngbe_tx_queue *txq;
+ /* Tx queue function in primary, set by last queue initialized
+ * Tx queue may not initialized by primary process
+ */
+ if (eth_dev->data->tx_queues) {
+ uint16_t nb_tx_queues = eth_dev->data->nb_tx_queues;
+ txq = eth_dev->data->tx_queues[nb_tx_queues - 1];
+ ngbe_set_tx_function(eth_dev, txq);
+ } else {
+ /* Use default Tx function if we get here */
+ PMD_INIT_LOG(NOTICE,
+ "No Tx queues configured yet. Using default Tx function.");
+ }
+
ngbe_set_rx_function(eth_dev);
return 0;
@@ -641,6 +656,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = 15872;
dev_info->rx_offload_capa = (ngbe_get_rx_port_offloads(dev) |
dev_info->rx_queue_offload_capa);
+ dev_info->tx_queue_offload_capa = 0;
+ dev_info->tx_offload_capa = ngbe_get_tx_port_offloads(dev);
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index e7fe9a03b7..cbf3ab558f 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -114,9 +114,15 @@ uint16_t ngbe_recv_pkts_sc_single_alloc(void *rx_queue,
uint16_t ngbe_recv_pkts_sc_bulk_alloc(void *rx_queue,
struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+uint16_t ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+
uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
+uint16_t ngbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+
void ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction,
uint8_t queue, uint8_t msix_vector);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 1661ecafa5..21f5808787 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -9,11 +9,24 @@
#include <rte_ethdev.h>
#include <ethdev_driver.h>
#include <rte_malloc.h>
+#include <rte_net.h>
#include "ngbe_logs.h"
#include "base/ngbe.h"
#include "ngbe_ethdev.h"
#include "ngbe_rxtx.h"
+/* Bit mask to indicate which bits are required for building the Tx context */
+static const u64 NGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM |
+ PKT_TX_OUTER_IPV6 |
+ PKT_TX_OUTER_IPV4 |
+ PKT_TX_IPV6 |
+ PKT_TX_IPV4 |
+ PKT_TX_L4_MASK |
+ PKT_TX_TCP_SEG |
+ PKT_TX_TUNNEL_MASK |
+ PKT_TX_OUTER_IP_CKSUM);
+#define NGBE_TX_OFFLOAD_NOTSUP_MASK \
+ (PKT_TX_OFFLOAD_MASK ^ NGBE_TX_OFFLOAD_MASK)
/*
* Prefetch a cache line into all cache levels.
@@ -248,6 +261,614 @@ ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_tx;
}
+static inline void
+ngbe_set_xmit_ctx(struct ngbe_tx_queue *txq,
+ volatile struct ngbe_tx_ctx_desc *ctx_txd,
+ uint64_t ol_flags, union ngbe_tx_offload tx_offload)
+{
+ union ngbe_tx_offload tx_offload_mask;
+ uint32_t type_tucmd_mlhl;
+ uint32_t mss_l4len_idx;
+ uint32_t ctx_idx;
+ uint32_t vlan_macip_lens;
+ uint32_t tunnel_seed;
+
+ ctx_idx = txq->ctx_curr;
+ tx_offload_mask.data[0] = 0;
+ tx_offload_mask.data[1] = 0;
+
+ /* Specify which HW CTX to upload. */
+ mss_l4len_idx = NGBE_TXD_IDX(ctx_idx);
+ type_tucmd_mlhl = NGBE_TXD_CTXT;
+
+ tx_offload_mask.ptid |= ~0;
+ type_tucmd_mlhl |= NGBE_TXD_PTID(tx_offload.ptid);
+
+ /* check if TCP segmentation is required for this packet */
+ if (ol_flags & PKT_TX_TCP_SEG) {
+ tx_offload_mask.l2_len |= ~0;
+ tx_offload_mask.l3_len |= ~0;
+ tx_offload_mask.l4_len |= ~0;
+ tx_offload_mask.tso_segsz |= ~0;
+ mss_l4len_idx |= NGBE_TXD_MSS(tx_offload.tso_segsz);
+ mss_l4len_idx |= NGBE_TXD_L4LEN(tx_offload.l4_len);
+ } else { /* no TSO, check if hardware checksum is needed */
+ if (ol_flags & PKT_TX_IP_CKSUM) {
+ tx_offload_mask.l2_len |= ~0;
+ tx_offload_mask.l3_len |= ~0;
+ }
+
+ switch (ol_flags & PKT_TX_L4_MASK) {
+ case PKT_TX_UDP_CKSUM:
+ mss_l4len_idx |=
+ NGBE_TXD_L4LEN(sizeof(struct rte_udp_hdr));
+ tx_offload_mask.l2_len |= ~0;
+ tx_offload_mask.l3_len |= ~0;
+ break;
+ case PKT_TX_TCP_CKSUM:
+ mss_l4len_idx |=
+ NGBE_TXD_L4LEN(sizeof(struct rte_tcp_hdr));
+ tx_offload_mask.l2_len |= ~0;
+ tx_offload_mask.l3_len |= ~0;
+ break;
+ case PKT_TX_SCTP_CKSUM:
+ mss_l4len_idx |=
+ NGBE_TXD_L4LEN(sizeof(struct rte_sctp_hdr));
+ tx_offload_mask.l2_len |= ~0;
+ tx_offload_mask.l3_len |= ~0;
+ break;
+ default:
+ break;
+ }
+ }
+
+ vlan_macip_lens = NGBE_TXD_IPLEN(tx_offload.l3_len >> 1);
+
+ if (ol_flags & PKT_TX_TUNNEL_MASK) {
+ tx_offload_mask.outer_tun_len |= ~0;
+ tx_offload_mask.outer_l2_len |= ~0;
+ tx_offload_mask.outer_l3_len |= ~0;
+ tx_offload_mask.l2_len |= ~0;
+ tunnel_seed = NGBE_TXD_ETUNLEN(tx_offload.outer_tun_len >> 1);
+ tunnel_seed |= NGBE_TXD_EIPLEN(tx_offload.outer_l3_len >> 2);
+
+ switch (ol_flags & PKT_TX_TUNNEL_MASK) {
+ case PKT_TX_TUNNEL_IPIP:
+ /* for non-UDP/GRE tunneling, set to 0b */
+ break;
+ default:
+ PMD_TX_LOG(ERR, "Tunnel type not supported");
+ return;
+ }
+ vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.outer_l2_len);
+ } else {
+ tunnel_seed = 0;
+ vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.l2_len);
+ }
+
+ txq->ctx_cache[ctx_idx].flags = ol_flags;
+ txq->ctx_cache[ctx_idx].tx_offload.data[0] =
+ tx_offload_mask.data[0] & tx_offload.data[0];
+ txq->ctx_cache[ctx_idx].tx_offload.data[1] =
+ tx_offload_mask.data[1] & tx_offload.data[1];
+ txq->ctx_cache[ctx_idx].tx_offload_mask = tx_offload_mask;
+
+ ctx_txd->dw0 = rte_cpu_to_le_32(vlan_macip_lens);
+ ctx_txd->dw1 = rte_cpu_to_le_32(tunnel_seed);
+ ctx_txd->dw2 = rte_cpu_to_le_32(type_tucmd_mlhl);
+ ctx_txd->dw3 = rte_cpu_to_le_32(mss_l4len_idx);
+}
+
+/*
+ * Check which hardware context can be used. Use the existing match
+ * or create a new context descriptor.
+ */
+static inline uint32_t
+what_ctx_update(struct ngbe_tx_queue *txq, uint64_t flags,
+ union ngbe_tx_offload tx_offload)
+{
+ /* If match with the current used context */
+ if (likely(txq->ctx_cache[txq->ctx_curr].flags == flags &&
+ (txq->ctx_cache[txq->ctx_curr].tx_offload.data[0] ==
+ (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[0]
+ & tx_offload.data[0])) &&
+ (txq->ctx_cache[txq->ctx_curr].tx_offload.data[1] ==
+ (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[1]
+ & tx_offload.data[1]))))
+ return txq->ctx_curr;
+
+ /* Otherwise, does it match the other cached context? */
+ txq->ctx_curr ^= 1;
+ if (likely(txq->ctx_cache[txq->ctx_curr].flags == flags &&
+ (txq->ctx_cache[txq->ctx_curr].tx_offload.data[0] ==
+ (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[0]
+ & tx_offload.data[0])) &&
+ (txq->ctx_cache[txq->ctx_curr].tx_offload.data[1] ==
+ (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[1]
+ & tx_offload.data[1]))))
+ return txq->ctx_curr;
+
+ /* Mismatch: a new context descriptor is required */
+ return NGBE_CTX_NUM;
+}
+
+static inline uint32_t
+tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
+{
+ uint32_t tmp = 0;
+
+ if ((ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM) {
+ tmp |= NGBE_TXD_CC;
+ tmp |= NGBE_TXD_L4CS;
+ }
+ if (ol_flags & PKT_TX_IP_CKSUM) {
+ tmp |= NGBE_TXD_CC;
+ tmp |= NGBE_TXD_IPCS;
+ }
+ if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+ tmp |= NGBE_TXD_CC;
+ tmp |= NGBE_TXD_EIPCS;
+ }
+ if (ol_flags & PKT_TX_TCP_SEG) {
+ tmp |= NGBE_TXD_CC;
+ /* implies IPv4 cksum */
+ if (ol_flags & PKT_TX_IPV4)
+ tmp |= NGBE_TXD_IPCS;
+ tmp |= NGBE_TXD_L4CS;
+ }
+
+ return tmp;
+}
+
+static inline uint32_t
+tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
+{
+ uint32_t cmdtype = 0;
+
+ if (ol_flags & PKT_TX_TCP_SEG)
+ cmdtype |= NGBE_TXD_TSE;
+ return cmdtype;
+}
+
+static inline uint8_t
+tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
+{
+ bool tun;
+
+ if (ptype)
+ return ngbe_encode_ptype(ptype);
+
+ /* Only support flags in NGBE_TX_OFFLOAD_MASK */
+ tun = !!(oflags & PKT_TX_TUNNEL_MASK);
+
+ /* L2 level */
+ ptype = RTE_PTYPE_L2_ETHER;
+
+ /* L3 level */
+ if (oflags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM))
+ ptype |= RTE_PTYPE_L3_IPV4;
+ else if (oflags & (PKT_TX_OUTER_IPV6))
+ ptype |= RTE_PTYPE_L3_IPV6;
+
+ if (oflags & (PKT_TX_IPV4 | PKT_TX_IP_CKSUM))
+ ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_L3_IPV4);
+ else if (oflags & (PKT_TX_IPV6))
+ ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV6 : RTE_PTYPE_L3_IPV6);
+
+ /* L4 level */
+ switch (oflags & (PKT_TX_L4_MASK)) {
+ case PKT_TX_TCP_CKSUM:
+ ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
+ break;
+ case PKT_TX_UDP_CKSUM:
+ ptype |= (tun ? RTE_PTYPE_INNER_L4_UDP : RTE_PTYPE_L4_UDP);
+ break;
+ case PKT_TX_SCTP_CKSUM:
+ ptype |= (tun ? RTE_PTYPE_INNER_L4_SCTP : RTE_PTYPE_L4_SCTP);
+ break;
+ }
+
+ if (oflags & PKT_TX_TCP_SEG)
+ ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
+
+ /* Tunnel */
+ switch (oflags & PKT_TX_TUNNEL_MASK) {
+ case PKT_TX_TUNNEL_IPIP:
+ case PKT_TX_TUNNEL_IP:
+ ptype |= RTE_PTYPE_L2_ETHER |
+ RTE_PTYPE_L3_IPV4 |
+ RTE_PTYPE_TUNNEL_IP;
+ break;
+ }
+
+ return ngbe_encode_ptype(ptype);
+}
+
+/* Reset transmit descriptors after they have been used */
+static inline int
+ngbe_xmit_cleanup(struct ngbe_tx_queue *txq)
+{
+ struct ngbe_tx_entry *sw_ring = txq->sw_ring;
+ volatile struct ngbe_tx_desc *txr = txq->tx_ring;
+ uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+ uint16_t nb_tx_desc = txq->nb_tx_desc;
+ uint16_t desc_to_clean_to;
+ uint16_t nb_tx_to_clean;
+ uint32_t status;
+
+ /* Determine the last descriptor needing to be cleaned */
+ desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_free_thresh);
+ if (desc_to_clean_to >= nb_tx_desc)
+ desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+ /* Check to make sure the last descriptor to clean is done */
+ desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+ status = txr[desc_to_clean_to].dw3;
+ if (!(status & rte_cpu_to_le_32(NGBE_TXD_DD))) {
+ PMD_TX_LOG(DEBUG,
+ "Tx descriptor %4u is not done"
+ "(port=%d queue=%d)",
+ desc_to_clean_to,
+ txq->port_id, txq->queue_id);
+ if (txq->nb_tx_free >> 1 < txq->tx_free_thresh)
+ ngbe_set32_masked(txq->tdc_reg_addr,
+ NGBE_TXCFG_FLUSH, NGBE_TXCFG_FLUSH);
+ /* Failed to clean any descriptors, better luck next time */
+ return -(1);
+ }
+
+ /* Figure out how many descriptors will be cleaned */
+ if (last_desc_cleaned > desc_to_clean_to)
+ nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+ desc_to_clean_to);
+ else
+ nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+ last_desc_cleaned);
+
+ PMD_TX_LOG(DEBUG,
+ "Cleaning %4u Tx descriptors: %4u to %4u (port=%d queue=%d)",
+ nb_tx_to_clean, last_desc_cleaned, desc_to_clean_to,
+ txq->port_id, txq->queue_id);
+
+ /*
+ * The last descriptor to clean is done, so that means all the
+ * descriptors from the last descriptor that was cleaned
+ * up to the last descriptor with the RS bit set
+ * are done. Only reset the threshold descriptor.
+ */
+ txr[desc_to_clean_to].dw3 = 0;
+
+ /* Update the txq to reflect the last descriptor that was cleaned */
+ txq->last_desc_cleaned = desc_to_clean_to;
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
+
+ /* No Error */
+ return 0;
+}
+
+uint16_t
+ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ struct ngbe_tx_queue *txq;
+ struct ngbe_tx_entry *sw_ring;
+ struct ngbe_tx_entry *txe, *txn;
+ volatile struct ngbe_tx_desc *txr;
+ volatile struct ngbe_tx_desc *txd;
+ struct rte_mbuf *tx_pkt;
+ struct rte_mbuf *m_seg;
+ uint64_t buf_dma_addr;
+ uint32_t olinfo_status;
+ uint32_t cmd_type_len;
+ uint32_t pkt_len;
+ uint16_t slen;
+ uint64_t ol_flags;
+ uint16_t tx_id;
+ uint16_t tx_last;
+ uint16_t nb_tx;
+ uint16_t nb_used;
+ uint64_t tx_ol_req;
+ uint32_t ctx = 0;
+ uint32_t new_ctx;
+ union ngbe_tx_offload tx_offload;
+
+ tx_offload.data[0] = 0;
+ tx_offload.data[1] = 0;
+ txq = tx_queue;
+ sw_ring = txq->sw_ring;
+ txr = txq->tx_ring;
+ tx_id = txq->tx_tail;
+ txe = &sw_ring[tx_id];
+
+ /* Determine if the descriptor ring needs to be cleaned. */
+ if (txq->nb_tx_free < txq->tx_free_thresh)
+ ngbe_xmit_cleanup(txq);
+
+ rte_prefetch0(&txe->mbuf->pool);
+
+ /* Tx loop */
+ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+ new_ctx = 0;
+ tx_pkt = *tx_pkts++;
+ pkt_len = tx_pkt->pkt_len;
+
+ /*
+ * Determine how many (if any) context descriptors
+ * are needed for offload functionality.
+ */
+ ol_flags = tx_pkt->ol_flags;
+
+ /* If hardware offload required */
+ tx_ol_req = ol_flags & NGBE_TX_OFFLOAD_MASK;
+ if (tx_ol_req) {
+ tx_offload.ptid = tx_desc_ol_flags_to_ptid(tx_ol_req,
+ tx_pkt->packet_type);
+ tx_offload.l2_len = tx_pkt->l2_len;
+ tx_offload.l3_len = tx_pkt->l3_len;
+ tx_offload.l4_len = tx_pkt->l4_len;
+ tx_offload.tso_segsz = tx_pkt->tso_segsz;
+ tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
+ tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
+ tx_offload.outer_tun_len = 0;
+
+ /* Check whether a new context must be built or an existing one reused */
+ ctx = what_ctx_update(txq, tx_ol_req, tx_offload);
+ /* Only allocate context descriptor if required */
+ new_ctx = (ctx == NGBE_CTX_NUM);
+ ctx = txq->ctx_curr;
+ }
+
+ /*
+ * Keep track of how many descriptors are used this loop
+ * This will always be the number of segments + the number of
+ * Context descriptors required to transmit the packet
+ */
+ nb_used = (uint16_t)(tx_pkt->nb_segs + new_ctx);
+
+ /*
+ * The number of descriptors that must be allocated for a
+ * packet is the number of segments of that packet, plus 1
+ * Context Descriptor for the hardware offload, if any.
+ * Determine the last Tx descriptor to allocate in the Tx ring
+ * for the packet, starting from the current position (tx_id)
+ * in the ring.
+ */
+ tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+ /* Circular ring */
+ if (tx_last >= txq->nb_tx_desc)
+ tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+ PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u pktlen=%u"
+ " tx_first=%u tx_last=%u",
+ (uint16_t)txq->port_id,
+ (uint16_t)txq->queue_id,
+ (uint32_t)pkt_len,
+ (uint16_t)tx_id,
+ (uint16_t)tx_last);
+
+ /*
+ * Make sure there are enough Tx descriptors available to
+ * transmit the entire packet.
+ * nb_used better be less than or equal to txq->tx_free_thresh
+ */
+ if (nb_used > txq->nb_tx_free) {
+ PMD_TX_LOG(DEBUG,
+ "Not enough free Tx descriptors "
+ "nb_used=%4u nb_free=%4u "
+ "(port=%d queue=%d)",
+ nb_used, txq->nb_tx_free,
+ txq->port_id, txq->queue_id);
+
+ if (ngbe_xmit_cleanup(txq) != 0) {
+ /* Could not clean any descriptors */
+ if (nb_tx == 0)
+ return 0;
+ goto end_of_tx;
+ }
+
+ /* nb_used better be <= txq->tx_free_thresh */
+ if (unlikely(nb_used > txq->tx_free_thresh)) {
+ PMD_TX_LOG(DEBUG,
+ "The number of descriptors needed to "
+ "transmit the packet exceeds the "
+ "RS bit threshold. This will impact "
+ "performance."
+ "nb_used=%4u nb_free=%4u "
+ "tx_free_thresh=%4u. "
+ "(port=%d queue=%d)",
+ nb_used, txq->nb_tx_free,
+ txq->tx_free_thresh,
+ txq->port_id, txq->queue_id);
+ /*
+ * Loop here until there are enough Tx
+ * descriptors or until the ring cannot be
+ * cleaned.
+ */
+ while (nb_used > txq->nb_tx_free) {
+ if (ngbe_xmit_cleanup(txq) != 0) {
+ /*
+ * Could not clean any
+ * descriptors
+ */
+ if (nb_tx == 0)
+ return 0;
+ goto end_of_tx;
+ }
+ }
+ }
+ }
+
+ /*
+ * By now there are enough free Tx descriptors to transmit
+ * the packet.
+ */
+
+ /*
+ * Set common flags of all Tx Data Descriptors.
+ *
+ * The following bits must be set in the first Data Descriptor
+ * and are ignored in the other ones:
+ * - NGBE_TXD_FCS
+ *
+ * The following bits must only be set in the last Data
+ * Descriptor:
+ * - NGBE_TXD_EOP
+ */
+ cmd_type_len = NGBE_TXD_FCS;
+
+ olinfo_status = 0;
+ if (tx_ol_req) {
+ if (ol_flags & PKT_TX_TCP_SEG) {
+ /* when TSO is on, the paylen in the descriptor is
+ * not the packet length but the TCP payload length
+ */
+ pkt_len -= (tx_offload.l2_len +
+ tx_offload.l3_len + tx_offload.l4_len);
+ pkt_len -=
+ (tx_pkt->ol_flags & PKT_TX_TUNNEL_MASK)
+ ? tx_offload.outer_l2_len +
+ tx_offload.outer_l3_len : 0;
+ }
+
+ /*
+ * Setup the Tx Context Descriptor if required
+ */
+ if (new_ctx) {
+ volatile struct ngbe_tx_ctx_desc *ctx_txd;
+
+ ctx_txd = (volatile struct ngbe_tx_ctx_desc *)
+ &txr[tx_id];
+
+ txn = &sw_ring[txe->next_id];
+ rte_prefetch0(&txn->mbuf->pool);
+
+ if (txe->mbuf != NULL) {
+ rte_pktmbuf_free_seg(txe->mbuf);
+ txe->mbuf = NULL;
+ }
+
+ ngbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
+ tx_offload);
+
+ txe->last_id = tx_last;
+ tx_id = txe->next_id;
+ txe = txn;
+ }
+
+ /*
+ * Set up the Tx Data Descriptor.
+ * This path is taken whether the context descriptor
+ * was newly built or reused.
+ */
+ cmd_type_len |= tx_desc_ol_flags_to_cmdtype(ol_flags);
+ olinfo_status |=
+ tx_desc_cksum_flags_to_olinfo(ol_flags);
+ olinfo_status |= NGBE_TXD_IDX(ctx);
+ }
+
+ olinfo_status |= NGBE_TXD_PAYLEN(pkt_len);
+
+ m_seg = tx_pkt;
+ do {
+ txd = &txr[tx_id];
+ txn = &sw_ring[txe->next_id];
+ rte_prefetch0(&txn->mbuf->pool);
+
+ if (txe->mbuf != NULL)
+ rte_pktmbuf_free_seg(txe->mbuf);
+ txe->mbuf = m_seg;
+
+ /*
+ * Set up Transmit Data Descriptor.
+ */
+ slen = m_seg->data_len;
+ buf_dma_addr = rte_mbuf_data_iova(m_seg);
+ txd->qw0 = rte_cpu_to_le_64(buf_dma_addr);
+ txd->dw2 = rte_cpu_to_le_32(cmd_type_len | slen);
+ txd->dw3 = rte_cpu_to_le_32(olinfo_status);
+ txe->last_id = tx_last;
+ tx_id = txe->next_id;
+ txe = txn;
+ m_seg = m_seg->next;
+ } while (m_seg != NULL);
+
+ /*
+ * The last packet data descriptor needs End Of Packet (EOP)
+ */
+ cmd_type_len |= NGBE_TXD_EOP;
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_used);
+
+ txd->dw2 |= rte_cpu_to_le_32(cmd_type_len);
+ }
+
+end_of_tx:
+
+ rte_wmb();
+
+ /*
+ * Set the Transmit Descriptor Tail (TDT)
+ */
+ PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+ (uint16_t)txq->port_id, (uint16_t)txq->queue_id,
+ (uint16_t)tx_id, (uint16_t)nb_tx);
+ ngbe_set32_relaxed(txq->tdt_reg_addr, tx_id);
+ txq->tx_tail = tx_id;
+
+ return nb_tx;
+}
+
+/*********************************************************************
+ *
+ * Tx prep functions
+ *
+ **********************************************************************/
+uint16_t
+ngbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ int i, ret;
+ uint64_t ol_flags;
+ struct rte_mbuf *m;
+ struct ngbe_tx_queue *txq = (struct ngbe_tx_queue *)tx_queue;
+
+ for (i = 0; i < nb_pkts; i++) {
+ m = tx_pkts[i];
+ ol_flags = m->ol_flags;
+
+ /**
+ * Check if packet meets requirements for number of segments
+ *
+ * NOTE: for ngbe it's always (40 - WTHRESH) for both TSO and
+ * non-TSO
+ */
+
+ if (m->nb_segs > NGBE_TX_MAX_SEG - txq->wthresh) {
+ rte_errno = EINVAL;
+ return i;
+ }
+
+ if (ol_flags & NGBE_TX_OFFLOAD_NOTSUP_MASK) {
+ rte_errno = ENOTSUP;
+ return i;
+ }
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ ret = rte_validate_tx_offload(m);
+ if (ret != 0) {
+ rte_errno = -ret;
+ return i;
+ }
+#endif
+ ret = rte_net_intel_cksum_prepare(m);
+ if (ret != 0) {
+ rte_errno = -ret;
+ return i;
+ }
+ }
+
+ return i;
+}
+
/*********************************************************************
*
* Rx functions
@@ -1044,6 +1665,56 @@ static const struct ngbe_txq_ops def_txq_ops = {
.reset = ngbe_reset_tx_queue,
};
+/* Takes an ethdev and a queue and sets up the tx function to be used based on
+ * the queue parameters. Used in tx_queue_setup by primary process and then
+ * in dev_init by secondary process when attaching to an existing ethdev.
+ */
+void
+ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq)
+{
+ /* Use a simple Tx queue (no offloads, no multi segs) if possible */
+ if (txq->offloads == 0 &&
+ txq->tx_free_thresh >= RTE_PMD_NGBE_TX_MAX_BURST) {
+ PMD_INIT_LOG(DEBUG, "Using simple tx code path");
+ dev->tx_pkt_burst = ngbe_xmit_pkts_simple;
+ dev->tx_pkt_prepare = NULL;
+ } else {
+ PMD_INIT_LOG(DEBUG, "Using full-featured tx code path");
+ PMD_INIT_LOG(DEBUG,
+ " - offloads = 0x%" PRIx64,
+ txq->offloads);
+ PMD_INIT_LOG(DEBUG,
+ " - tx_free_thresh = %lu [RTE_PMD_NGBE_TX_MAX_BURST=%lu]",
+ (unsigned long)txq->tx_free_thresh,
+ (unsigned long)RTE_PMD_NGBE_TX_MAX_BURST);
+ dev->tx_pkt_burst = ngbe_xmit_pkts;
+ dev->tx_pkt_prepare = ngbe_prep_pkts;
+ }
+}
+
+uint64_t
+ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
+{
+ uint64_t tx_offload_capa;
+
+ RTE_SET_USED(dev);
+
+ tx_offload_capa =
+ DEV_TX_OFFLOAD_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_UDP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_CKSUM |
+ DEV_TX_OFFLOAD_SCTP_CKSUM |
+ DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_TCP_TSO |
+ DEV_TX_OFFLOAD_UDP_TSO |
+ DEV_TX_OFFLOAD_UDP_TNL_TSO |
+ DEV_TX_OFFLOAD_IP_TNL_TSO |
+ DEV_TX_OFFLOAD_IPIP_TNL_TSO |
+ DEV_TX_OFFLOAD_MULTI_SEGS;
+
+ return tx_offload_capa;
+}
+
int
ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx,
@@ -1055,10 +1726,13 @@ ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
struct ngbe_tx_queue *txq;
struct ngbe_hw *hw;
uint16_t tx_free_thresh;
+ uint64_t offloads;
PMD_INIT_FUNC_TRACE();
hw = ngbe_dev_hw(dev);
+ offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
/*
* The Tx descriptor ring will be cleaned after txq->tx_free_thresh
* descriptors are used or if the number of descriptors required
@@ -1120,6 +1794,7 @@ ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->queue_id = queue_idx;
txq->reg_idx = queue_idx;
txq->port_id = dev->data->port_id;
+ txq->offloads = offloads;
txq->ops = &def_txq_ops;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
@@ -1141,6 +1816,9 @@ ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
"sw_ring=%p hw_ring=%p dma_addr=0x%" PRIx64,
txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
+ /* set up scalar Tx function as appropriate */
+ ngbe_set_tx_function(dev, txq);
+
txq->ops->reset(txq);
dev->data->tx_queues[queue_idx] = txq;
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index 07b5ac3fbe..27c83f45a7 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -135,8 +135,35 @@ struct ngbe_tx_ctx_desc {
rte_le32_t dw3; /* w.mss_l4len_idx */
};
+/* @ngbe_tx_ctx_desc.dw0 */
+#define NGBE_TXD_IPLEN(v) LS(v, 0, 0x1FF) /* ip/fcoe header end */
+#define NGBE_TXD_MACLEN(v) LS(v, 9, 0x7F) /* desc mac len */
+#define NGBE_TXD_VLAN(v) LS(v, 16, 0xFFFF) /* vlan tag */
+
+/* @ngbe_tx_ctx_desc.dw1 */
+/*** bit 0-31, when NGBE_TXD_DTYP_FCOE=0 ***/
+#define NGBE_TXD_IPSEC_SAIDX(v) LS(v, 0, 0x3FF) /* ipsec SA index */
+#define NGBE_TXD_ETYPE(v) LS(v, 11, 0x1) /* tunnel type */
+#define NGBE_TXD_ETYPE_UDP LS(0, 11, 0x1)
+#define NGBE_TXD_ETYPE_GRE LS(1, 11, 0x1)
+#define NGBE_TXD_EIPLEN(v) LS(v, 12, 0x7F) /* tunnel ip header */
+#define NGBE_TXD_DTYP_FCOE MS(16, 0x1) /* FCoE/IP descriptor */
+#define NGBE_TXD_ETUNLEN(v) LS(v, 21, 0xFF) /* tunnel header */
+#define NGBE_TXD_DECTTL(v) LS(v, 29, 0xF) /* decrease ip TTL */
+
+/* @ngbe_tx_ctx_desc.dw2 */
+#define NGBE_TXD_IPSEC_ESPLEN(v) LS(v, 1, 0x1FF) /* ipsec ESP length */
+#define NGBE_TXD_SNAP MS(10, 0x1) /* SNAP indication */
+#define NGBE_TXD_TPID_SEL(v) LS(v, 11, 0x7) /* vlan tag index */
+#define NGBE_TXD_IPSEC_ESP MS(14, 0x1) /* ipsec type: esp=1 ah=0 */
+#define NGBE_TXD_IPSEC_ESPENC MS(15, 0x1) /* ESP encrypt */
+#define NGBE_TXD_CTXT MS(20, 0x1) /* context descriptor */
+#define NGBE_TXD_PTID(v) LS(v, 24, 0xFF) /* packet type */
/* @ngbe_tx_ctx_desc.dw3 */
#define NGBE_TXD_DD MS(0, 0x1) /* descriptor done */
+#define NGBE_TXD_IDX(v) LS(v, 4, 0x1) /* ctxt desc index */
+#define NGBE_TXD_L4LEN(v) LS(v, 8, 0xFF) /* l4 header length */
+#define NGBE_TXD_MSS(v) LS(v, 16, 0xFFFF) /* l4 MSS */
/**
* Transmit Data Descriptor (NGBE_TXD_TYP=DATA)
@@ -259,11 +286,34 @@ enum ngbe_ctx_num {
NGBE_CTX_NUM = 2, /**< CTX NUMBER */
};
+/** Offload features */
+union ngbe_tx_offload {
+ uint64_t data[2];
+ struct {
+ uint64_t ptid:8; /**< Packet Type Identifier. */
+ uint64_t l2_len:7; /**< L2 (MAC) Header Length. */
+ uint64_t l3_len:9; /**< L3 (IP) Header Length. */
+ uint64_t l4_len:8; /**< L4 (TCP/UDP) Header Length. */
+ uint64_t tso_segsz:16; /**< TCP TSO segment size */
+ uint64_t vlan_tci:16;
+ /**< VLAN Tag Control Identifier (CPU order). */
+
+ /* fields for TX offloading of tunnels */
+ uint64_t outer_tun_len:8; /**< Outer TUN (Tunnel) Hdr Length. */
+ uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
+ uint64_t outer_l3_len:16; /**< Outer L3 (IP) Hdr Length. */
+ };
+};
+
/**
* Structure to check if new context need be built
*/
struct ngbe_ctx_info {
uint64_t flags; /**< ol_flags for context build. */
+ /** Tx offload: vlan, tso, l2-l3-l4 lengths. */
+ union ngbe_tx_offload tx_offload;
+ /** compare mask for tx offload. */
+ union ngbe_tx_offload tx_offload_mask;
};
/**
@@ -295,6 +345,7 @@ struct ngbe_tx_queue {
uint8_t pthresh; /**< Prefetch threshold register */
uint8_t hthresh; /**< Host threshold register */
uint8_t wthresh; /**< Write-back threshold reg */
+ uint64_t offloads; /**< Tx offload flags */
uint32_t ctx_curr; /**< Hardware context states */
/** Hardware context0 history */
struct ngbe_ctx_info ctx_cache[NGBE_CTX_NUM];
@@ -309,8 +360,15 @@ struct ngbe_txq_ops {
void (*reset)(struct ngbe_tx_queue *txq);
};
+/* Takes an ethdev and a queue and sets up the tx function to be used based on
+ * the queue parameters. Used in tx_queue_setup by primary process and then
+ * in dev_init by secondary process when attaching to an existing ethdev.
+ */
+void ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq);
+
void ngbe_set_rx_function(struct rte_eth_dev *dev);
+uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev);
uint64_t ngbe_get_rx_port_offloads(struct rte_eth_dev *dev);
#endif /* _NGBE_RXTX_H_ */
--
2.21.0.windows.1
* [dpdk-dev] [PATCH 05/32] net/ngbe: support CRC offload
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (3 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 04/32] net/ngbe: support TSO Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-15 16:48 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 06/32] net/ngbe: support jumbo frame Jiawen Wu
` (26 subsequent siblings)
31 siblings, 1 reply; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support stripping or keeping the CRC in the Rx path.
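For reference, a minimal sketch of how an application could request
this through the generic ethdev API (the port id and queue counts are
illustrative assumptions); with KEEP_CRC set, received mbufs retain the
4-byte FCS, which is why the Rx paths below subtract rxq->crc_len:
#include <rte_ethdev.h>
static int
app_keep_crc_sketch(uint16_t port_id)
{
	struct rte_eth_conf conf = { 0 };
	/* Keep the Ethernet FCS instead of letting hardware strip it. */
	conf.rxmode.offloads = DEV_RX_OFFLOAD_KEEP_CRC;
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}
With the offload cleared (the default), hardware strips the CRC and
rxq->crc_len stays 0.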
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 1 +
drivers/net/ngbe/ngbe_rxtx.c | 53 +++++++++++++++++++++++++++++--
drivers/net/ngbe/ngbe_rxtx.h | 1 +
3 files changed, 53 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 32f74a3084..2a472d9434 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -10,6 +10,7 @@ Link status event = Y
Queue start/stop = Y
Scattered Rx = Y
TSO = Y
+CRC offload = P
L3 checksum offload = P
L4 checksum offload = P
Inner L3 checksum = P
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 21f5808787..f9d8cf9d19 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -968,7 +968,8 @@ ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq)
/* Translate descriptor info to mbuf format */
for (j = 0; j < nb_dd; ++j) {
mb = rxep[j].mbuf;
- pkt_len = rte_le_to_cpu_16(rxdp[j].qw1.hi.len);
+ pkt_len = rte_le_to_cpu_16(rxdp[j].qw1.hi.len) -
+ rxq->crc_len;
mb->data_len = pkt_len;
mb->pkt_len = pkt_len;
@@ -1271,7 +1272,8 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* - IP checksum flag,
* - error flags.
*/
- pkt_len = (uint16_t)(rte_le_to_cpu_16(rxd.qw1.hi.len));
+ pkt_len = (uint16_t)(rte_le_to_cpu_16(rxd.qw1.hi.len) -
+ rxq->crc_len);
rxm->data_off = RTE_PKTMBUF_HEADROOM;
rte_packet_prefetch((char *)rxm->buf_addr + rxm->data_off);
rxm->nb_segs = 1;
@@ -1521,6 +1523,22 @@ ngbe_recv_pkts_sc(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
/* Initialize the first mbuf of the returned packet */
ngbe_fill_cluster_head_buf(first_seg, &rxd, rxq, staterr);
+ /* Deal with the case when HW CRC stripping is disabled. */
+ first_seg->pkt_len -= rxq->crc_len;
+ if (unlikely(rxm->data_len <= rxq->crc_len)) {
+ struct rte_mbuf *lp;
+
+ for (lp = first_seg; lp->next != rxm; lp = lp->next)
+ ;
+
+ first_seg->nb_segs--;
+ lp->data_len -= rxq->crc_len - rxm->data_len;
+ lp->next = NULL;
+ rte_pktmbuf_free_seg(rxm);
+ } else {
+ rxm->data_len -= rxq->crc_len;
+ }
+
/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
first_seg->data_off);
@@ -1989,6 +2007,7 @@ ngbe_get_rx_port_offloads(struct rte_eth_dev *dev __rte_unused)
offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
+ DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER;
return offloads;
@@ -2032,6 +2051,10 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->queue_id = queue_idx;
rxq->reg_idx = queue_idx;
rxq->port_id = dev->data->port_id;
+ if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ rxq->crc_len = RTE_ETHER_CRC_LEN;
+ else
+ rxq->crc_len = 0;
rxq->drop_en = rx_conf->rx_drop_en;
rxq->rx_deferred_start = rx_conf->rx_deferred_start;
rxq->offloads = offloads;
@@ -2259,6 +2282,7 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
uint32_t fctrl;
uint32_t hlreg0;
uint32_t srrctl;
+ uint32_t rdrxctl;
uint32_t rxcsum;
uint16_t buf_size;
uint16_t i;
@@ -2279,7 +2303,14 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
fctrl |= NGBE_PSRCTL_BCA;
wr32(hw, NGBE_PSRCTL, fctrl);
+ /*
+ * Configure CRC stripping, if any.
+ */
hlreg0 = rd32(hw, NGBE_SECRXCTL);
+ if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ hlreg0 &= ~NGBE_SECRXCTL_CRCSTRIP;
+ else
+ hlreg0 |= NGBE_SECRXCTL_CRCSTRIP;
hlreg0 &= ~NGBE_SECRXCTL_XDSA;
wr32(hw, NGBE_SECRXCTL, hlreg0);
@@ -2290,6 +2321,15 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
+ /*
+ * Reset crc_len in case it was changed after queue setup by a
+ * call to configure.
+ */
+ if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ rxq->crc_len = RTE_ETHER_CRC_LEN;
+ else
+ rxq->crc_len = 0;
+
/* Setup the Base and Length of the Rx Descriptor Rings */
bus_addr = rxq->rx_ring_phys_addr;
wr32(hw, NGBE_RXBAL(rxq->reg_idx),
@@ -2334,6 +2374,15 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
wr32(hw, NGBE_PSRCTL, rxcsum);
+ if (hw->is_pf) {
+ rdrxctl = rd32(hw, NGBE_SECRXCTL);
+ if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+ rdrxctl &= ~NGBE_SECRXCTL_CRCSTRIP;
+ else
+ rdrxctl |= NGBE_SECRXCTL_CRCSTRIP;
+ wr32(hw, NGBE_SECRXCTL, rdrxctl);
+ }
+
ngbe_set_rx_function(dev);
return 0;
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index 27c83f45a7..07b6e2374e 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -268,6 +268,7 @@ struct ngbe_rx_queue {
/** Packet type mask for different NICs */
uint16_t pkt_type_mask;
uint16_t port_id; /**< Device port identifier */
+ uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. */
uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En */
uint8_t rx_deferred_start; /**< not in global dev start */
uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
--
2.21.0.windows.1
* [dpdk-dev] [PATCH 06/32] net/ngbe: support jumbo frame
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (4 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 05/32] net/ngbe: support CRC offload Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-15 16:48 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 07/32] net/ngbe: support VLAN and QinQ offload Jiawen Wu
` (25 subsequent siblings)
31 siblings, 1 reply; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add support for Rx jumbo frames.
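For reference, a minimal sketch of enabling jumbo receive from an
application (the 9000-byte limit and port id are illustrative
assumptions); max_rx_pkt_len is the value this patch programs into
NGBE_FRMSZ:
#include <rte_ethdev.h>
static int
app_jumbo_sketch(uint16_t port_id)
{
	struct rte_eth_conf conf = { 0 };
	/* SCATTER lets jumbo frames span multiple mbufs if needed. */
	conf.rxmode.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME |
			       DEV_RX_OFFLOAD_SCATTER;
	conf.rxmode.max_rx_pkt_len = 9000;	/* illustrative */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}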
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 1 +
doc/guides/nics/ngbe.rst | 1 +
drivers/net/ngbe/ngbe_rxtx.c | 11 ++++++++++-
3 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 2a472d9434..30fdfe62c7 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -8,6 +8,7 @@ Speed capabilities = Y
Link status = Y
Link status event = Y
Queue start/stop = Y
+Jumbo frame = Y
Scattered Rx = Y
TSO = Y
CRC offload = P
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 6a6ae39243..702a455041 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -14,6 +14,7 @@ Features
- Packet type information
- Checksum offload
- TSO offload
+- Jumbo frames
- Link state information
- Scattered and gather for TX and RX
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index f9d8cf9d19..4238fbe3b8 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -2008,6 +2008,7 @@ ngbe_get_rx_port_offloads(struct rte_eth_dev *dev __rte_unused)
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
+ DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER;
return offloads;
@@ -2314,8 +2315,16 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
hlreg0 &= ~NGBE_SECRXCTL_XDSA;
wr32(hw, NGBE_SECRXCTL, hlreg0);
- wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK,
+ /*
+ * Configure jumbo frame support, if any.
+ */
+ if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK,
+ NGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
+ } else {
+ wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK,
NGBE_FRMSZ_MAX(NGBE_FRAME_SIZE_DFT));
+ }
/* Setup Rx queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
--
2.21.0.windows.1
* [dpdk-dev] [PATCH 07/32] net/ngbe: support VLAN and QinQ offload
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (5 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 06/32] net/ngbe: support jumbo frame Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 08/32] net/ngbe: support basic statistics Jiawen Wu
` (24 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support configuring VLAN and QinQ offloads.
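For reference, a sketch of exercising these offloads from an
application (the port id and VLAN tag value are illustrative
assumptions); the runtime toggle maps onto the ngbe_vlan_offload_set()
handler added below:
#include <rte_ethdev.h>
#include <rte_mbuf.h>
static void
app_vlan_sketch(uint16_t port_id, struct rte_mbuf *m)
{
	/* Runtime strip/filter toggle; invokes .vlan_offload_set. */
	rte_eth_dev_set_vlan_offload(port_id,
		ETH_VLAN_STRIP_OFFLOAD | ETH_VLAN_FILTER_OFFLOAD);
	/* Tx-side VLAN insertion: the tag is taken from the mbuf. */
	m->ol_flags |= PKT_TX_VLAN_PKT;
	m->vlan_tci = 100;	/* illustrative VLAN ID */
	rte_eth_tx_burst(port_id, 0, &m, 1);
}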
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 2 +
doc/guides/nics/ngbe.rst | 1 +
drivers/net/ngbe/ngbe_ethdev.c | 273 ++++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_ethdev.h | 42 +++++
drivers/net/ngbe/ngbe_rxtx.c | 119 ++++++++++++-
drivers/net/ngbe/ngbe_rxtx.h | 3 +
6 files changed, 434 insertions(+), 6 deletions(-)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 30fdfe62c7..4ae2d66d15 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -12,6 +12,8 @@ Jumbo frame = Y
Scattered Rx = Y
TSO = Y
CRC offload = P
+VLAN offload = P
+QinQ offload = P
L3 checksum offload = P
L4 checksum offload = P
Inner L3 checksum = P
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 702a455041..9518a59443 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -13,6 +13,7 @@ Features
- Packet type information
- Checksum offload
+- VLAN/QinQ stripping and inserting
- TSO offload
- Jumbo frames
- Link state information
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index e7d63f1b14..3903eb0a2c 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -17,6 +17,9 @@
static int ngbe_dev_close(struct rte_eth_dev *dev);
static int ngbe_dev_link_update(struct rte_eth_dev *dev,
int wait_to_complete);
+static void ngbe_vlan_hw_strip_enable(struct rte_eth_dev *dev, uint16_t queue);
+static void ngbe_vlan_hw_strip_disable(struct rte_eth_dev *dev,
+ uint16_t queue);
static void ngbe_dev_link_status_print(struct rte_eth_dev *dev);
static int ngbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev, uint8_t on);
@@ -27,6 +30,24 @@ static void ngbe_dev_interrupt_handler(void *param);
static void ngbe_dev_interrupt_delayed_handler(void *param);
static void ngbe_configure_msix(struct rte_eth_dev *dev);
+#define NGBE_SET_HWSTRIP(h, q) do {\
+ uint32_t idx = (q) / (sizeof((h)->bitmap[0]) * NBBY); \
+ uint32_t bit = (q) % (sizeof((h)->bitmap[0]) * NBBY); \
+ (h)->bitmap[idx] |= 1 << bit;\
+ } while (0)
+
+#define NGBE_CLEAR_HWSTRIP(h, q) do {\
+ uint32_t idx = (q) / (sizeof((h)->bitmap[0]) * NBBY); \
+ uint32_t bit = (q) % (sizeof((h)->bitmap[0]) * NBBY); \
+ (h)->bitmap[idx] &= ~(1 << bit);\
+ } while (0)
+
+#define NGBE_GET_HWSTRIP(h, q, r) do {\
+ uint32_t idx = (q) / (sizeof((h)->bitmap[0]) * NBBY); \
+ uint32_t bit = (q) % (sizeof((h)->bitmap[0]) * NBBY); \
+ (r) = (h)->bitmap[idx] >> bit & 1;\
+ } while (0)
+
/*
* The set of PCI devices this driver supports
*/
@@ -129,6 +150,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ struct ngbe_vfta *shadow_vfta = NGBE_DEV_VFTA(eth_dev);
+ struct ngbe_hwstrip *hwstrip = NGBE_DEV_HWSTRIP(eth_dev);
struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
@@ -242,6 +265,12 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
return -ENOMEM;
}
+ /* initialize the vfta */
+ memset(shadow_vfta, 0, sizeof(*shadow_vfta));
+
+ /* initialize the hw strip bitmap*/
+ memset(hwstrip, 0, sizeof(*hwstrip));
+
ctrl_ext = rd32(hw, NGBE_PORTCTL);
/* let hardware know driver is loaded */
ctrl_ext |= NGBE_PORTCTL_DRVLOAD;
@@ -311,6 +340,237 @@ static struct rte_pci_driver rte_ngbe_pmd = {
.remove = eth_ngbe_pci_remove,
};
+void
+ngbe_vlan_hw_filter_disable(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t vlnctrl;
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* Filter Table Disable */
+ vlnctrl = rd32(hw, NGBE_VLANCTL);
+ vlnctrl &= ~NGBE_VLANCTL_VFE;
+ wr32(hw, NGBE_VLANCTL, vlnctrl);
+}
+
+void
+ngbe_vlan_hw_filter_enable(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_vfta *shadow_vfta = NGBE_DEV_VFTA(dev);
+ uint32_t vlnctrl;
+ uint16_t i;
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* Filter Table Enable */
+ vlnctrl = rd32(hw, NGBE_VLANCTL);
+ vlnctrl &= ~NGBE_VLANCTL_CFIENA;
+ vlnctrl |= NGBE_VLANCTL_VFE;
+ wr32(hw, NGBE_VLANCTL, vlnctrl);
+
+ /* write whatever is in local vfta copy */
+ for (i = 0; i < NGBE_VFTA_SIZE; i++)
+ wr32(hw, NGBE_VLANTBL(i), shadow_vfta->vfta[i]);
+}
+
+void
+ngbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
+{
+ struct ngbe_hwstrip *hwstrip = NGBE_DEV_HWSTRIP(dev);
+ struct ngbe_rx_queue *rxq;
+
+ if (queue >= NGBE_MAX_RX_QUEUE_NUM)
+ return;
+
+ if (on)
+ NGBE_SET_HWSTRIP(hwstrip, queue);
+ else
+ NGBE_CLEAR_HWSTRIP(hwstrip, queue);
+
+ if (queue >= dev->data->nb_rx_queues)
+ return;
+
+ rxq = dev->data->rx_queues[queue];
+
+ if (on) {
+ rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+ rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ } else {
+ rxq->vlan_flags = PKT_RX_VLAN;
+ rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ }
+}
+
+static void
+ngbe_vlan_hw_strip_disable(struct rte_eth_dev *dev, uint16_t queue)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t ctrl;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ctrl = rd32(hw, NGBE_RXCFG(queue));
+ ctrl &= ~NGBE_RXCFG_VLAN;
+ wr32(hw, NGBE_RXCFG(queue), ctrl);
+
+ /* record this setting for per-queue HW strip */
+ ngbe_vlan_hw_strip_bitmap_set(dev, queue, 0);
+}
+
+static void
+ngbe_vlan_hw_strip_enable(struct rte_eth_dev *dev, uint16_t queue)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t ctrl;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ctrl = rd32(hw, NGBE_RXCFG(queue));
+ ctrl |= NGBE_RXCFG_VLAN;
+ wr32(hw, NGBE_RXCFG(queue), ctrl);
+
+ /* record this setting for per-queue HW strip */
+ ngbe_vlan_hw_strip_bitmap_set(dev, queue, 1);
+}
+
+static void
+ngbe_vlan_hw_extend_disable(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t ctrl;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ctrl = rd32(hw, NGBE_PORTCTL);
+ ctrl &= ~NGBE_PORTCTL_VLANEXT;
+ ctrl &= ~NGBE_PORTCTL_QINQ;
+ wr32(hw, NGBE_PORTCTL, ctrl);
+}
+
+static void
+ngbe_vlan_hw_extend_enable(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t ctrl;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ctrl = rd32(hw, NGBE_PORTCTL);
+ ctrl |= NGBE_PORTCTL_VLANEXT | NGBE_PORTCTL_QINQ;
+ wr32(hw, NGBE_PORTCTL, ctrl);
+}
+
+static void
+ngbe_qinq_hw_strip_disable(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t ctrl;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ctrl = rd32(hw, NGBE_PORTCTL);
+ ctrl &= ~NGBE_PORTCTL_QINQ;
+ wr32(hw, NGBE_PORTCTL, ctrl);
+}
+
+static void
+ngbe_qinq_hw_strip_enable(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t ctrl;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ctrl = rd32(hw, NGBE_PORTCTL);
+ ctrl |= NGBE_PORTCTL_QINQ | NGBE_PORTCTL_VLANEXT;
+ wr32(hw, NGBE_PORTCTL, ctrl);
+}
+
+void
+ngbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
+{
+ struct ngbe_rx_queue *rxq;
+ uint16_t i;
+
+ PMD_INIT_FUNC_TRACE();
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rxq = dev->data->rx_queues[i];
+
+ if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ ngbe_vlan_hw_strip_enable(dev, i);
+ else
+ ngbe_vlan_hw_strip_disable(dev, i);
+ }
+}
+
+void
+ngbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
+{
+ uint16_t i;
+ struct rte_eth_rxmode *rxmode;
+ struct ngbe_rx_queue *rxq;
+
+ if (mask & ETH_VLAN_STRIP_MASK) {
+ rxmode = &dev->data->dev_conf.rxmode;
+ if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rxq = dev->data->rx_queues[i];
+ rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ }
+ else
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rxq = dev->data->rx_queues[i];
+ rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+ }
+ }
+}
+
+static int
+ngbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
+{
+ struct rte_eth_rxmode *rxmode;
+ rxmode = &dev->data->dev_conf.rxmode;
+
+ if (mask & ETH_VLAN_STRIP_MASK)
+ ngbe_vlan_hw_strip_config(dev);
+
+ if (mask & ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ ngbe_vlan_hw_filter_enable(dev);
+ else
+ ngbe_vlan_hw_filter_disable(dev);
+ }
+
+ if (mask & ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ ngbe_vlan_hw_extend_enable(dev);
+ else
+ ngbe_vlan_hw_extend_disable(dev);
+ }
+
+ if (mask & ETH_QINQ_STRIP_MASK) {
+ if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+ ngbe_qinq_hw_strip_enable(dev);
+ else
+ ngbe_qinq_hw_strip_disable(dev);
+ }
+
+ return 0;
+}
+
+static int
+ngbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+ ngbe_config_vlan_strip_on_all_queues(dev, mask);
+
+ ngbe_vlan_offload_config(dev, mask);
+
+ return 0;
+}
+
static int
ngbe_dev_configure(struct rte_eth_dev *dev)
{
@@ -363,6 +623,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
bool link_up = false, negotiate = false;
uint32_t speed = 0;
uint32_t allowed_speeds = 0;
+ int mask = 0;
int status;
uint32_t *link_speeds;
@@ -420,6 +681,16 @@ ngbe_dev_start(struct rte_eth_dev *dev)
goto error;
}
+ mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
+ ETH_VLAN_EXTEND_MASK;
+ err = ngbe_vlan_offload_config(dev, mask);
+ if (err != 0) {
+ PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
+ goto error;
+ }
+
+ ngbe_configure_port(dev);
+
err = ngbe_dev_rxtx_start(dev);
if (err < 0) {
PMD_INIT_LOG(ERR, "Unable to start rxtx queues");
@@ -654,6 +925,7 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues;
dev_info->min_rx_bufsize = 1024;
dev_info->max_rx_pktlen = 15872;
+ dev_info->rx_queue_offload_capa = ngbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (ngbe_get_rx_port_offloads(dev) |
dev_info->rx_queue_offload_capa);
dev_info->tx_queue_offload_capa = 0;
@@ -1190,6 +1462,7 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.dev_close = ngbe_dev_close,
.dev_reset = ngbe_dev_reset,
.link_update = ngbe_dev_link_update,
+ .vlan_offload_set = ngbe_vlan_offload_set,
.rx_queue_start = ngbe_dev_rx_queue_start,
.rx_queue_stop = ngbe_dev_rx_queue_stop,
.tx_queue_start = ngbe_dev_tx_queue_start,
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index cbf3ab558f..8b3a1cdc3d 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -15,6 +15,17 @@
#define NGBE_FLAG_MACSEC ((uint32_t)(1 << 3))
#define NGBE_FLAG_NEED_LINK_CONFIG ((uint32_t)(1 << 4))
+#define NGBE_VFTA_SIZE 128
+#define NGBE_VLAN_TAG_SIZE 4
+/*Default value of Max Rx Queue*/
+#define NGBE_MAX_RX_QUEUE_NUM 8
+
+#ifndef NBBY
+#define NBBY 8 /* number of bits in a byte */
+#endif
+#define NGBE_HWSTRIP_BITMAP_SIZE \
+ (NGBE_MAX_RX_QUEUE_NUM / (sizeof(uint32_t) * NBBY))
+
#define NGBE_QUEUE_ITR_INTERVAL_DEFAULT 500 /* 500us */
#define NGBE_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
@@ -29,12 +40,22 @@ struct ngbe_interrupt {
uint64_t mask_orig; /* save mask during delayed handler */
};
+struct ngbe_vfta {
+ uint32_t vfta[NGBE_VFTA_SIZE];
+};
+
+struct ngbe_hwstrip {
+ uint32_t bitmap[NGBE_HWSTRIP_BITMAP_SIZE];
+};
+
/*
* Structure to store private data for each driver instance (for each port).
*/
struct ngbe_adapter {
struct ngbe_hw hw;
struct ngbe_interrupt intr;
+ struct ngbe_vfta shadow_vfta;
+ struct ngbe_hwstrip hwstrip;
bool rx_bulk_alloc_allowed;
};
@@ -64,6 +85,12 @@ ngbe_dev_intr(struct rte_eth_dev *dev)
return intr;
}
+#define NGBE_DEV_VFTA(dev) \
+ (&((struct ngbe_adapter *)(dev)->data->dev_private)->shadow_vfta)
+
+#define NGBE_DEV_HWSTRIP(dev) \
+ (&((struct ngbe_adapter *)(dev)->data->dev_private)->hwstrip)
+
/*
* Rx/Tx function prototypes
*/
@@ -126,10 +153,21 @@ uint16_t ngbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
void ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction,
uint8_t queue, uint8_t msix_vector);
+void ngbe_configure_port(struct rte_eth_dev *dev);
+
int
ngbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait_to_complete);
+/*
+ * misc function prototypes
+ */
+void ngbe_vlan_hw_filter_enable(struct rte_eth_dev *dev);
+
+void ngbe_vlan_hw_filter_disable(struct rte_eth_dev *dev);
+
+void ngbe_vlan_hw_strip_config(struct rte_eth_dev *dev);
+
#define NGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
#define NGBE_LINK_UP_CHECK_TIMEOUT 1000 /* ms */
#define NGBE_VMDQ_NUM_UC_MAC 4096 /* Maximum nb. of UC MAC addr. */
@@ -148,5 +186,9 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
#define NGBE_DEFAULT_TX_WTHRESH 0
const uint32_t *ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+void ngbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev,
+ uint16_t queue, bool on);
+void ngbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev,
+ int mask);
#endif /* _NGBE_ETHDEV_H_ */
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 4238fbe3b8..1151173b02 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -21,6 +21,7 @@ static const u64 NGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM |
PKT_TX_OUTER_IPV4 |
PKT_TX_IPV6 |
PKT_TX_IPV4 |
+ PKT_TX_VLAN_PKT |
PKT_TX_L4_MASK |
PKT_TX_TCP_SEG |
PKT_TX_TUNNEL_MASK |
@@ -346,6 +347,11 @@ ngbe_set_xmit_ctx(struct ngbe_tx_queue *txq,
vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.l2_len);
}
+ if (ol_flags & PKT_TX_VLAN_PKT) {
+ tx_offload_mask.vlan_tci |= ~0;
+ vlan_macip_lens |= NGBE_TXD_VLAN(tx_offload.vlan_tci);
+ }
+
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
tx_offload_mask.data[0] & tx_offload.data[0];
@@ -416,6 +422,8 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
tmp |= NGBE_TXD_IPCS;
tmp |= NGBE_TXD_L4CS;
}
+ if (ol_flags & PKT_TX_VLAN_PKT)
+ tmp |= NGBE_TXD_CC;
return tmp;
}
@@ -425,6 +433,8 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
{
uint32_t cmdtype = 0;
+ if (ol_flags & PKT_TX_VLAN_PKT)
+ cmdtype |= NGBE_TXD_VLE;
if (ol_flags & PKT_TX_TCP_SEG)
cmdtype |= NGBE_TXD_TSE;
return cmdtype;
@@ -443,6 +453,8 @@ tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
/* L2 level */
ptype = RTE_PTYPE_L2_ETHER;
+ if (oflags & PKT_TX_VLAN)
+ ptype |= RTE_PTYPE_L2_ETHER_VLAN;
/* L3 level */
if (oflags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM))
@@ -606,6 +618,7 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.l2_len = tx_pkt->l2_len;
tx_offload.l3_len = tx_pkt->l3_len;
tx_offload.l4_len = tx_pkt->l4_len;
+ tx_offload.vlan_tci = tx_pkt->vlan_tci;
tx_offload.tso_segsz = tx_pkt->tso_segsz;
tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
@@ -884,6 +897,23 @@ ngbe_rxd_pkt_info_to_pkt_type(uint32_t pkt_info, uint16_t ptid_mask)
return ngbe_decode_ptype(ptid);
}
+static inline uint64_t
+rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags)
+{
+ uint64_t pkt_flags;
+
+ /*
+ * Check only whether a VLAN is present.
+ * Whether the NIC performed the L3/L4 Rx checksum is not checked
+ * here; that can be found in the rte_eth_rxmode.offloads flags.
+ */
+ pkt_flags = (rx_status & NGBE_RXD_STAT_VLAN &&
+ vlan_flags & PKT_RX_VLAN_STRIPPED)
+ ? vlan_flags : 0;
+
+ return pkt_flags;
+}
+
static inline uint64_t
rx_desc_error_to_pkt_flags(uint32_t rx_status)
{
@@ -972,9 +1002,12 @@ ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq)
rxq->crc_len;
mb->data_len = pkt_len;
mb->pkt_len = pkt_len;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].qw1.hi.tag);
/* convert descriptor fields to rte mbuf flags */
- pkt_flags = rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags = rx_desc_status_to_pkt_flags(s[j],
+ rxq->vlan_flags);
+ pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
mb->ol_flags = pkt_flags;
mb->packet_type =
ngbe_rxd_pkt_info_to_pkt_type(pkt_info[j],
@@ -1270,6 +1303,7 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* - Rx port identifier.
* 2) integrate hardware offload data, if any:
* - IP checksum flag,
+ * - VLAN TCI, if any,
* - error flags.
*/
pkt_len = (uint16_t)(rte_le_to_cpu_16(rxd.qw1.hi.len) -
@@ -1283,7 +1317,12 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->port = rxq->port_id;
pkt_info = rte_le_to_cpu_32(rxd.qw0.dw0);
- pkt_flags = rx_desc_error_to_pkt_flags(staterr);
+ /* Only valid if PKT_RX_VLAN set in pkt_flags */
+ rxm->vlan_tci = rte_le_to_cpu_16(rxd.qw1.hi.tag);
+
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr,
+ rxq->vlan_flags);
+ pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
rxm->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info,
rxq->pkt_type_mask);
@@ -1328,6 +1367,7 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* - RX port identifier
* - hardware offload data, if any:
* - IP checksum flag
+ * - VLAN TCI, if any
* - error flags
* @head HEAD of the packet cluster
* @desc HW descriptor to get data from
@@ -1342,8 +1382,13 @@ ngbe_fill_cluster_head_buf(struct rte_mbuf *head, struct ngbe_rx_desc *desc,
head->port = rxq->port_id;
+ /* The vlan_tci field is only valid when PKT_RX_VLAN is
+ * set in the pkt_flags field.
+ */
+ head->vlan_tci = rte_le_to_cpu_16(desc->qw1.hi.tag);
pkt_info = rte_le_to_cpu_32(desc->qw0.dw0);
- pkt_flags = rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags = rx_desc_status_to_pkt_flags(staterr, rxq->vlan_flags);
+ pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
head->ol_flags = pkt_flags;
head->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info,
rxq->pkt_type_mask);
@@ -1714,10 +1759,10 @@ uint64_t
ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
{
uint64_t tx_offload_capa;
-
- RTE_SET_USED(dev);
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
tx_offload_capa =
+ DEV_TX_OFFLOAD_VLAN_INSERT |
DEV_TX_OFFLOAD_IPV4_CKSUM |
DEV_TX_OFFLOAD_UDP_CKSUM |
DEV_TX_OFFLOAD_TCP_CKSUM |
@@ -1730,6 +1775,9 @@ ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
DEV_TX_OFFLOAD_IPIP_TNL_TSO |
DEV_TX_OFFLOAD_MULTI_SEGS;
+ if (hw->is_pf)
+ tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
+
return tx_offload_capa;
}
@@ -2000,17 +2048,29 @@ ngbe_reset_rx_queue(struct ngbe_adapter *adapter, struct ngbe_rx_queue *rxq)
}
uint64_t
-ngbe_get_rx_port_offloads(struct rte_eth_dev *dev __rte_unused)
+ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
+{
+ return DEV_RX_OFFLOAD_VLAN_STRIP;
+}
+
+uint64_t
+ngbe_get_rx_port_offloads(struct rte_eth_dev *dev)
{
uint64_t offloads;
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_JUMBO_FRAME |
+ DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_SCATTER;
+ if (hw->is_pf)
+ offloads |= (DEV_RX_OFFLOAD_QINQ_STRIP |
+ DEV_RX_OFFLOAD_VLAN_EXTEND);
+
return offloads;
}
@@ -2189,6 +2249,40 @@ ngbe_dev_free_queues(struct rte_eth_dev *dev)
dev->data->nb_tx_queues = 0;
}
+void ngbe_configure_port(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ int i = 0;
+ uint16_t tpids[8] = {RTE_ETHER_TYPE_VLAN, RTE_ETHER_TYPE_QINQ,
+ 0x9100, 0x9200,
+ 0x0000, 0x0000,
+ 0x0000, 0x0000};
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* default outer vlan tpid */
+ wr32(hw, NGBE_EXTAG,
+ NGBE_EXTAG_ETAG(RTE_ETHER_TYPE_ETAG) |
+ NGBE_EXTAG_VLAN(RTE_ETHER_TYPE_QINQ));
+
+ /* default inner vlan tpid */
+ wr32m(hw, NGBE_VLANCTL,
+ NGBE_VLANCTL_TPID_MASK,
+ NGBE_VLANCTL_TPID(RTE_ETHER_TYPE_VLAN));
+ wr32m(hw, NGBE_DMATXCTRL,
+ NGBE_DMATXCTRL_TPID_MASK,
+ NGBE_DMATXCTRL_TPID(RTE_ETHER_TYPE_VLAN));
+
+ /* default vlan tpid filters */
+ for (i = 0; i < 8; i++) {
+ wr32m(hw, NGBE_TAGTPID(i / 2),
+ (i % 2 ? NGBE_TAGTPID_MSB_MASK
+ : NGBE_TAGTPID_LSB_MASK),
+ (i % 2 ? NGBE_TAGTPID_MSB(tpids[i])
+ : NGBE_TAGTPID_LSB(tpids[i])));
+ }
+}
+
static int
ngbe_alloc_rx_queue_mbufs(struct ngbe_rx_queue *rxq)
{
@@ -2326,6 +2420,12 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
NGBE_FRMSZ_MAX(NGBE_FRAME_SIZE_DFT));
}
+ /*
+ * Assume no header split and no VLAN strip support
+ * on any Rx queue at first.
+ */
+ rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+
/* Setup Rx queues */
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq = dev->data->rx_queues[i];
@@ -2366,6 +2466,13 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
srrctl |= NGBE_RXCFG_PKTLEN(buf_size);
wr32(hw, NGBE_RXCFG(rxq->reg_idx), srrctl);
+
+ /* Add dual VLAN tag length to support dual VLAN (QinQ) */
+ if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
+ 2 * NGBE_VLAN_TAG_SIZE > buf_size)
+ dev->data->scattered_rx = 1;
+ if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
}
if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index 07b6e2374e..812bc57c9e 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -271,6 +271,8 @@ struct ngbe_rx_queue {
uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. */
uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En */
uint8_t rx_deferred_start; /**< not in global dev start */
+ /** flags to set in mbuf when a vlan is detected */
+ uint64_t vlan_flags;
uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
struct rte_mbuf fake_mbuf;
@@ -370,6 +372,7 @@ void ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq);
void ngbe_set_rx_function(struct rte_eth_dev *dev);
uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev);
+uint64_t ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
uint64_t ngbe_get_rx_port_offloads(struct rte_eth_dev *dev);
#endif /* _NGBE_RXTX_H_ */
--
2.21.0.windows.1
* [dpdk-dev] [PATCH 08/32] net/ngbe: support basic statistics
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (6 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 07/32] net/ngbe: support VLAN and QinQ offload Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-15 16:50 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 09/32] net/ngbe: support device xstats Jiawen Wu
` (23 subsequent siblings)
31 siblings, 1 reply; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support reading and clearing basic statistics, and configuring the
per-queue stats counter mapping.
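For reference, a sketch of the application-side calls this patch
enables (the port id, queue and counter indices are illustrative
assumptions):
#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>
static void
app_stats_sketch(uint16_t port_id)
{
	struct rte_eth_stats stats;
	/* Map queue 0 onto per-queue stats counter 0 before traffic runs. */
	rte_eth_dev_set_rx_queue_stats_mapping(port_id, 0, 0);
	rte_eth_dev_set_tx_queue_stats_mapping(port_id, 0, 0);
	if (rte_eth_stats_get(port_id, &stats) == 0)
		printf("rx=%" PRIu64 " tx=%" PRIu64 " rx_bytes=%" PRIu64 "\n",
		       stats.ipackets, stats.opackets, stats.ibytes);
	rte_eth_stats_reset(port_id);
}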
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 2 +
doc/guides/nics/ngbe.rst | 1 +
drivers/net/ngbe/base/ngbe_dummy.h | 5 +
drivers/net/ngbe/base/ngbe_hw.c | 101 ++++++++++
drivers/net/ngbe/base/ngbe_hw.h | 1 +
drivers/net/ngbe/base/ngbe_type.h | 134 +++++++++++++
drivers/net/ngbe/ngbe_ethdev.c | 300 +++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_ethdev.h | 19 ++
8 files changed, 563 insertions(+)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 4ae2d66d15..f310fb102a 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -19,6 +19,8 @@ L4 checksum offload = P
Inner L3 checksum = P
Inner L4 checksum = P
Packet type parsing = Y
+Basic stats = Y
+Stats per queue = Y
Multiprocess aware = Y
Linux = Y
ARMv8 = Y
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 9518a59443..64c07e4741 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -15,6 +15,7 @@ Features
- Checksum offload
- VLAN/QinQ stripping and inserting
- TSO offload
+- Port hardware statistics
- Jumbo frames
- Link state information
- Scattered and gather for TX and RX
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 8863acef0d..0def116c53 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -55,6 +55,10 @@ static inline s32 ngbe_mac_stop_hw_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_mac_clear_hw_cntrs_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_mac_get_mac_addr_dummy(struct ngbe_hw *TUP0, u8 *TUP1)
{
return NGBE_ERR_OPS_DUMMY;
@@ -178,6 +182,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->mac.reset_hw = ngbe_mac_reset_hw_dummy;
hw->mac.start_hw = ngbe_mac_start_hw_dummy;
hw->mac.stop_hw = ngbe_mac_stop_hw_dummy;
+ hw->mac.clear_hw_cntrs = ngbe_mac_clear_hw_cntrs_dummy;
hw->mac.get_mac_addr = ngbe_mac_get_mac_addr_dummy;
hw->mac.enable_rx_dma = ngbe_mac_enable_rx_dma_dummy;
hw->mac.disable_sec_rx_path = ngbe_mac_disable_sec_rx_path_dummy;
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 6b575fc67b..f302df5d9d 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -19,6 +19,9 @@ s32 ngbe_start_hw(struct ngbe_hw *hw)
{
DEBUGFUNC("ngbe_start_hw");
+ /* Clear statistics registers */
+ hw->mac.clear_hw_cntrs(hw);
+
/* Clear adapter stopped flag */
hw->adapter_stopped = false;
@@ -159,6 +162,7 @@ s32 ngbe_reset_hw_em(struct ngbe_hw *hw)
msec_delay(50);
ngbe_reset_misc_em(hw);
+ hw->mac.clear_hw_cntrs(hw);
msec_delay(50);
@@ -175,6 +179,102 @@ s32 ngbe_reset_hw_em(struct ngbe_hw *hw)
return status;
}
+/**
+ * ngbe_clear_hw_cntrs - Generic clear hardware counters
+ * @hw: pointer to hardware structure
+ *
+ * Clears all hardware statistics counters by reading them from the hardware.
+ * Statistics counters are clear-on-read.
+ **/
+s32 ngbe_clear_hw_cntrs(struct ngbe_hw *hw)
+{
+ u16 i = 0;
+
+ DEBUGFUNC("ngbe_clear_hw_cntrs");
+
+ /* QP Stats */
+ /* don't write clear queue stats */
+ for (i = 0; i < NGBE_MAX_QP; i++) {
+ hw->qp_last[i].rx_qp_packets = 0;
+ hw->qp_last[i].tx_qp_packets = 0;
+ hw->qp_last[i].rx_qp_bytes = 0;
+ hw->qp_last[i].tx_qp_bytes = 0;
+ hw->qp_last[i].rx_qp_mc_packets = 0;
+ hw->qp_last[i].tx_qp_mc_packets = 0;
+ hw->qp_last[i].rx_qp_bc_packets = 0;
+ hw->qp_last[i].tx_qp_bc_packets = 0;
+ }
+
+ /* PB Stats */
+ rd32(hw, NGBE_PBRXLNKXON);
+ rd32(hw, NGBE_PBRXLNKXOFF);
+ rd32(hw, NGBE_PBTXLNKXON);
+ rd32(hw, NGBE_PBTXLNKXOFF);
+
+ /* DMA Stats */
+ rd32(hw, NGBE_DMARXPKT);
+ rd32(hw, NGBE_DMATXPKT);
+
+ rd64(hw, NGBE_DMARXOCTL);
+ rd64(hw, NGBE_DMATXOCTL);
+
+ /* MAC Stats */
+ rd64(hw, NGBE_MACRXERRCRCL);
+ rd64(hw, NGBE_MACRXMPKTL);
+ rd64(hw, NGBE_MACTXMPKTL);
+
+ rd64(hw, NGBE_MACRXPKTL);
+ rd64(hw, NGBE_MACTXPKTL);
+ rd64(hw, NGBE_MACRXGBOCTL);
+
+ rd64(hw, NGBE_MACRXOCTL);
+ rd32(hw, NGBE_MACTXOCTL);
+
+ rd64(hw, NGBE_MACRX1TO64L);
+ rd64(hw, NGBE_MACRX65TO127L);
+ rd64(hw, NGBE_MACRX128TO255L);
+ rd64(hw, NGBE_MACRX256TO511L);
+ rd64(hw, NGBE_MACRX512TO1023L);
+ rd64(hw, NGBE_MACRX1024TOMAXL);
+ rd64(hw, NGBE_MACTX1TO64L);
+ rd64(hw, NGBE_MACTX65TO127L);
+ rd64(hw, NGBE_MACTX128TO255L);
+ rd64(hw, NGBE_MACTX256TO511L);
+ rd64(hw, NGBE_MACTX512TO1023L);
+ rd64(hw, NGBE_MACTX1024TOMAXL);
+
+ rd64(hw, NGBE_MACRXERRLENL);
+ rd32(hw, NGBE_MACRXOVERSIZE);
+ rd32(hw, NGBE_MACRXJABBER);
+
+ /* MACsec Stats */
+ rd32(hw, NGBE_LSECTX_UTPKT);
+ rd32(hw, NGBE_LSECTX_ENCPKT);
+ rd32(hw, NGBE_LSECTX_PROTPKT);
+ rd32(hw, NGBE_LSECTX_ENCOCT);
+ rd32(hw, NGBE_LSECTX_PROTOCT);
+ rd32(hw, NGBE_LSECRX_UTPKT);
+ rd32(hw, NGBE_LSECRX_BTPKT);
+ rd32(hw, NGBE_LSECRX_NOSCIPKT);
+ rd32(hw, NGBE_LSECRX_UNSCIPKT);
+ rd32(hw, NGBE_LSECRX_DECOCT);
+ rd32(hw, NGBE_LSECRX_VLDOCT);
+ rd32(hw, NGBE_LSECRX_UNCHKPKT);
+ rd32(hw, NGBE_LSECRX_DLYPKT);
+ rd32(hw, NGBE_LSECRX_LATEPKT);
+ for (i = 0; i < 2; i++) {
+ rd32(hw, NGBE_LSECRX_OKPKT(i));
+ rd32(hw, NGBE_LSECRX_INVPKT(i));
+ rd32(hw, NGBE_LSECRX_BADPKT(i));
+ }
+ for (i = 0; i < 4; i++) {
+ rd32(hw, NGBE_LSECRX_INVSAPKT(i));
+ rd32(hw, NGBE_LSECRX_BADSAPKT(i));
+ }
+
+ return 0;
+}
+
/**
* ngbe_get_mac_addr - Generic get MAC address
* @hw: pointer to hardware structure
@@ -988,6 +1088,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
mac->init_hw = ngbe_init_hw;
mac->reset_hw = ngbe_reset_hw_em;
mac->start_hw = ngbe_start_hw;
+ mac->clear_hw_cntrs = ngbe_clear_hw_cntrs;
mac->enable_rx_dma = ngbe_enable_rx_dma;
mac->get_mac_addr = ngbe_get_mac_addr;
mac->stop_hw = ngbe_stop_hw;
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 17a0a03c88..6a08c02bee 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -17,6 +17,7 @@ s32 ngbe_init_hw(struct ngbe_hw *hw);
s32 ngbe_start_hw(struct ngbe_hw *hw);
s32 ngbe_reset_hw_em(struct ngbe_hw *hw);
s32 ngbe_stop_hw(struct ngbe_hw *hw);
+s32 ngbe_clear_hw_cntrs(struct ngbe_hw *hw);
s32 ngbe_get_mac_addr(struct ngbe_hw *hw, u8 *mac_addr);
void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw);
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 28540e4ba0..c13f0208fd 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -9,6 +9,7 @@
#define NGBE_LINK_UP_TIME 90 /* 9.0 Seconds */
#define NGBE_FRAME_SIZE_DFT (1522) /* Default frame size, +FCS */
+#define NGBE_MAX_QP (8)
#define NGBE_ALIGN 128 /* as intel did */
#define NGBE_ISB_SIZE 16
@@ -77,6 +78,127 @@ struct ngbe_bus_info {
u8 lan_id;
};
+/* Statistics counters collected by the MAC */
+/* PB[] RxTx */
+struct ngbe_pb_stats {
+ u64 tx_pb_xon_packets;
+ u64 rx_pb_xon_packets;
+ u64 tx_pb_xoff_packets;
+ u64 rx_pb_xoff_packets;
+ u64 rx_pb_dropped;
+ u64 rx_pb_mbuf_alloc_errors;
+ u64 tx_pb_xon2off_packets;
+};
+
+/* QP[] RxTx */
+struct ngbe_qp_stats {
+ u64 rx_qp_packets;
+ u64 tx_qp_packets;
+ u64 rx_qp_bytes;
+ u64 tx_qp_bytes;
+ u64 rx_qp_mc_packets;
+};
+
+struct ngbe_hw_stats {
+ /* MNG RxTx */
+ u64 mng_bmc2host_packets;
+ u64 mng_host2bmc_packets;
+ /* Basic RxTx */
+ u64 rx_drop_packets;
+ u64 tx_drop_packets;
+ u64 rx_dma_drop;
+ u64 tx_secdrp_packets;
+ u64 rx_packets;
+ u64 tx_packets;
+ u64 rx_bytes;
+ u64 tx_bytes;
+ u64 rx_total_bytes;
+ u64 rx_total_packets;
+ u64 tx_total_packets;
+ u64 rx_total_missed_packets;
+ u64 rx_broadcast_packets;
+ u64 tx_broadcast_packets;
+ u64 rx_multicast_packets;
+ u64 tx_multicast_packets;
+ u64 rx_management_packets;
+ u64 tx_management_packets;
+ u64 rx_management_dropped;
+
+ /* Basic Error */
+ u64 rx_crc_errors;
+ u64 rx_illegal_byte_errors;
+ u64 rx_error_bytes;
+ u64 rx_mac_short_packet_dropped;
+ u64 rx_length_errors;
+ u64 rx_undersize_errors;
+ u64 rx_fragment_errors;
+ u64 rx_oversize_errors;
+ u64 rx_jabber_errors;
+ u64 rx_l3_l4_xsum_error;
+ u64 mac_local_errors;
+ u64 mac_remote_errors;
+
+ /* MACSEC */
+ u64 tx_macsec_pkts_untagged;
+ u64 tx_macsec_pkts_encrypted;
+ u64 tx_macsec_pkts_protected;
+ u64 tx_macsec_octets_encrypted;
+ u64 tx_macsec_octets_protected;
+ u64 rx_macsec_pkts_untagged;
+ u64 rx_macsec_pkts_badtag;
+ u64 rx_macsec_pkts_nosci;
+ u64 rx_macsec_pkts_unknownsci;
+ u64 rx_macsec_octets_decrypted;
+ u64 rx_macsec_octets_validated;
+ u64 rx_macsec_sc_pkts_unchecked;
+ u64 rx_macsec_sc_pkts_delayed;
+ u64 rx_macsec_sc_pkts_late;
+ u64 rx_macsec_sa_pkts_ok;
+ u64 rx_macsec_sa_pkts_invalid;
+ u64 rx_macsec_sa_pkts_notvalid;
+ u64 rx_macsec_sa_pkts_unusedsa;
+ u64 rx_macsec_sa_pkts_notusingsa;
+
+ /* MAC RxTx */
+ u64 rx_size_64_packets;
+ u64 rx_size_65_to_127_packets;
+ u64 rx_size_128_to_255_packets;
+ u64 rx_size_256_to_511_packets;
+ u64 rx_size_512_to_1023_packets;
+ u64 rx_size_1024_to_max_packets;
+ u64 tx_size_64_packets;
+ u64 tx_size_65_to_127_packets;
+ u64 tx_size_128_to_255_packets;
+ u64 tx_size_256_to_511_packets;
+ u64 tx_size_512_to_1023_packets;
+ u64 tx_size_1024_to_max_packets;
+
+ /* Flow Control */
+ u64 tx_xon_packets;
+ u64 rx_xon_packets;
+ u64 tx_xoff_packets;
+ u64 rx_xoff_packets;
+
+ u64 rx_up_dropped;
+
+ u64 rdb_pkt_cnt;
+ u64 rdb_repli_cnt;
+ u64 rdb_drp_cnt;
+
+ /* QP[] RxTx */
+ struct {
+ u64 rx_qp_packets;
+ u64 tx_qp_packets;
+ u64 rx_qp_bytes;
+ u64 tx_qp_bytes;
+ u64 rx_qp_mc_packets;
+ u64 tx_qp_mc_packets;
+ u64 rx_qp_bc_packets;
+ u64 tx_qp_bc_packets;
+ } qp[NGBE_MAX_QP];
+
+};
+
struct ngbe_rom_info {
s32 (*init_params)(struct ngbe_hw *hw);
s32 (*validate_checksum)(struct ngbe_hw *hw, u16 *checksum_val);
@@ -96,6 +218,7 @@ struct ngbe_mac_info {
s32 (*reset_hw)(struct ngbe_hw *hw);
s32 (*start_hw)(struct ngbe_hw *hw);
s32 (*stop_hw)(struct ngbe_hw *hw);
+ s32 (*clear_hw_cntrs)(struct ngbe_hw *hw);
s32 (*get_mac_addr)(struct ngbe_hw *hw, u8 *mac_addr);
s32 (*enable_rx_dma)(struct ngbe_hw *hw, u32 regval);
s32 (*disable_sec_rx_path)(struct ngbe_hw *hw);
@@ -195,7 +318,18 @@ struct ngbe_hw {
u32 q_rx_regs[8 * 4];
u32 q_tx_regs[8 * 4];
+ bool offset_loaded;
bool is_pf;
+ struct {
+ u64 rx_qp_packets;
+ u64 tx_qp_packets;
+ u64 rx_qp_bytes;
+ u64 tx_qp_bytes;
+ u64 rx_qp_mc_packets;
+ u64 tx_qp_mc_packets;
+ u64 rx_qp_bc_packets;
+ u64 tx_qp_bc_packets;
+ } qp_last[NGBE_MAX_QP];
};
#include "ngbe_regs.h"
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3903eb0a2c..3d459718b1 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -17,6 +17,7 @@
static int ngbe_dev_close(struct rte_eth_dev *dev);
static int ngbe_dev_link_update(struct rte_eth_dev *dev,
int wait_to_complete);
+static int ngbe_dev_stats_reset(struct rte_eth_dev *dev);
static void ngbe_vlan_hw_strip_enable(struct rte_eth_dev *dev, uint16_t queue);
static void ngbe_vlan_hw_strip_disable(struct rte_eth_dev *dev,
uint16_t queue);
@@ -122,6 +123,56 @@ ngbe_disable_intr(struct ngbe_hw *hw)
ngbe_flush(hw);
}
+static int
+ngbe_dev_queue_stats_mapping_set(struct rte_eth_dev *eth_dev,
+ uint16_t queue_id,
+ uint8_t stat_idx,
+ uint8_t is_rx)
+{
+ struct ngbe_stat_mappings *stat_mappings =
+ NGBE_DEV_STAT_MAPPINGS(eth_dev);
+ uint32_t qsmr_mask = 0;
+ uint32_t clearing_mask = QMAP_FIELD_RESERVED_BITS_MASK;
+ uint32_t q_map;
+ uint8_t n, offset;
+
+ if (stat_idx & ~QMAP_FIELD_RESERVED_BITS_MASK)
+ return -EIO;
+
+ PMD_INIT_LOG(DEBUG, "Setting port %d, %s queue_id %d to stat index %d",
+ (int)(eth_dev->data->port_id), is_rx ? "RX" : "TX",
+ queue_id, stat_idx);
+
+ n = (uint8_t)(queue_id / NB_QMAP_FIELDS_PER_QSM_REG);
+ if (n >= NGBE_NB_STAT_MAPPING) {
+ PMD_INIT_LOG(ERR, "Nb of stat mapping registers exceeded");
+ return -EIO;
+ }
+ offset = (uint8_t)(queue_id % NB_QMAP_FIELDS_PER_QSM_REG);
+
+ /* Now clear any previous stat_idx set */
+ clearing_mask <<= (QSM_REG_NB_BITS_PER_QMAP_FIELD * offset);
+ if (!is_rx)
+ stat_mappings->tqsm[n] &= ~clearing_mask;
+ else
+ stat_mappings->rqsm[n] &= ~clearing_mask;
+
+ q_map = (uint32_t)stat_idx;
+ q_map &= QMAP_FIELD_RESERVED_BITS_MASK;
+ qsmr_mask = q_map << (QSM_REG_NB_BITS_PER_QMAP_FIELD * offset);
+ if (!is_rx)
+ stat_mappings->tqsm[n] |= qsmr_mask;
+ else
+ stat_mappings->rqsm[n] |= qsmr_mask;
+
+ PMD_INIT_LOG(DEBUG, "Set port %d, %s queue_id %d to stat index %d",
+ (int)(eth_dev->data->port_id), is_rx ? "RX" : "TX",
+ queue_id, stat_idx);
+ PMD_INIT_LOG(DEBUG, "%s[%d] = 0x%08x", is_rx ? "RQSMR" : "TQSM", n,
+ is_rx ? stat_mappings->rqsm[n] : stat_mappings->tqsm[n]);
+ return 0;
+}
+
/*
* Ensure that all locks are released before first NVM or PHY access
*/
@@ -236,6 +287,9 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
return -EIO;
}
+ /* Reset the hw statistics */
+ ngbe_dev_stats_reset(eth_dev);
+
/* disable interrupt */
ngbe_disable_intr(hw);
@@ -616,6 +670,7 @@ static int
ngbe_dev_start(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
uint32_t intr_vector = 0;
@@ -780,6 +835,9 @@ ngbe_dev_start(struct rte_eth_dev *dev)
*/
ngbe_dev_link_update(dev, 0);
+ ngbe_read_stats_registers(hw, hw_stats);
+ hw->offset_loaded = 1;
+
return 0;
error:
@@ -916,6 +974,245 @@ ngbe_dev_reset(struct rte_eth_dev *dev)
return ret;
}
+#define UPDATE_QP_COUNTER_32bit(reg, last_counter, counter) \
+ { \
+ uint32_t current_counter = rd32(hw, reg); \
+ if (current_counter < last_counter) \
+ current_counter += 0x100000000LL; \
+ if (!hw->offset_loaded) \
+ last_counter = current_counter; \
+ counter = current_counter - last_counter; \
+ counter &= 0xFFFFFFFFLL; \
+ }
+
+#define UPDATE_QP_COUNTER_36bit(reg_lsb, reg_msb, last_counter, counter) \
+ { \
+ uint64_t current_counter_lsb = rd32(hw, reg_lsb); \
+ uint64_t current_counter_msb = rd32(hw, reg_msb); \
+ uint64_t current_counter = (current_counter_msb << 32) | \
+ current_counter_lsb; \
+ if (current_counter < last_counter) \
+ current_counter += 0x1000000000LL; \
+ if (!hw->offset_loaded) \
+ last_counter = current_counter; \
+ counter = current_counter - last_counter; \
+ counter &= 0xFFFFFFFFFLL; \
+ }
+
+void
+ngbe_read_stats_registers(struct ngbe_hw *hw,
+ struct ngbe_hw_stats *hw_stats)
+{
+ unsigned int i;
+
+ /* QP Stats */
+ for (i = 0; i < hw->nb_rx_queues; i++) {
+ UPDATE_QP_COUNTER_32bit(NGBE_QPRXPKT(i),
+ hw->qp_last[i].rx_qp_packets,
+ hw_stats->qp[i].rx_qp_packets);
+ UPDATE_QP_COUNTER_36bit(NGBE_QPRXOCTL(i), NGBE_QPRXOCTH(i),
+ hw->qp_last[i].rx_qp_bytes,
+ hw_stats->qp[i].rx_qp_bytes);
+ UPDATE_QP_COUNTER_32bit(NGBE_QPRXMPKT(i),
+ hw->qp_last[i].rx_qp_mc_packets,
+ hw_stats->qp[i].rx_qp_mc_packets);
+ UPDATE_QP_COUNTER_32bit(NGBE_QPRXBPKT(i),
+ hw->qp_last[i].rx_qp_bc_packets,
+ hw_stats->qp[i].rx_qp_bc_packets);
+ }
+
+ for (i = 0; i < hw->nb_tx_queues; i++) {
+ UPDATE_QP_COUNTER_32bit(NGBE_QPTXPKT(i),
+ hw->qp_last[i].tx_qp_packets,
+ hw_stats->qp[i].tx_qp_packets);
+ UPDATE_QP_COUNTER_36bit(NGBE_QPTXOCTL(i), NGBE_QPTXOCTH(i),
+ hw->qp_last[i].tx_qp_bytes,
+ hw_stats->qp[i].tx_qp_bytes);
+ UPDATE_QP_COUNTER_32bit(NGBE_QPTXMPKT(i),
+ hw->qp_last[i].tx_qp_mc_packets,
+ hw_stats->qp[i].tx_qp_mc_packets);
+ UPDATE_QP_COUNTER_32bit(NGBE_QPTXBPKT(i),
+ hw->qp_last[i].tx_qp_bc_packets,
+ hw_stats->qp[i].tx_qp_bc_packets);
+ }
+
+ /* PB Stats */
+ hw_stats->rx_up_dropped += rd32(hw, NGBE_PBRXMISS);
+ hw_stats->rdb_pkt_cnt += rd32(hw, NGBE_PBRXPKT);
+ hw_stats->rdb_repli_cnt += rd32(hw, NGBE_PBRXREP);
+ hw_stats->rdb_drp_cnt += rd32(hw, NGBE_PBRXDROP);
+ hw_stats->tx_xoff_packets += rd32(hw, NGBE_PBTXLNKXOFF);
+ hw_stats->tx_xon_packets += rd32(hw, NGBE_PBTXLNKXON);
+
+ hw_stats->rx_xon_packets += rd32(hw, NGBE_PBRXLNKXON);
+ hw_stats->rx_xoff_packets += rd32(hw, NGBE_PBRXLNKXOFF);
+
+ /* DMA Stats */
+ hw_stats->rx_drop_packets += rd32(hw, NGBE_DMARXDROP);
+ hw_stats->tx_drop_packets += rd32(hw, NGBE_DMATXDROP);
+ hw_stats->rx_dma_drop += rd32(hw, NGBE_DMARXDROP);
+ hw_stats->tx_secdrp_packets += rd32(hw, NGBE_DMATXSECDROP);
+ hw_stats->rx_packets += rd32(hw, NGBE_DMARXPKT);
+ hw_stats->tx_packets += rd32(hw, NGBE_DMATXPKT);
+ hw_stats->rx_bytes += rd64(hw, NGBE_DMARXOCTL);
+ hw_stats->tx_bytes += rd64(hw, NGBE_DMATXOCTL);
+
+ /* MAC Stats */
+ hw_stats->rx_crc_errors += rd64(hw, NGBE_MACRXERRCRCL);
+ hw_stats->rx_multicast_packets += rd64(hw, NGBE_MACRXMPKTL);
+ hw_stats->tx_multicast_packets += rd64(hw, NGBE_MACTXMPKTL);
+
+ hw_stats->rx_total_packets += rd64(hw, NGBE_MACRXPKTL);
+ hw_stats->tx_total_packets += rd64(hw, NGBE_MACTXPKTL);
+ hw_stats->rx_total_bytes += rd64(hw, NGBE_MACRXGBOCTL);
+
+ hw_stats->rx_broadcast_packets += rd64(hw, NGBE_MACRXOCTL);
+ hw_stats->tx_broadcast_packets += rd32(hw, NGBE_MACTXOCTL);
+
+ hw_stats->rx_size_64_packets += rd64(hw, NGBE_MACRX1TO64L);
+ hw_stats->rx_size_65_to_127_packets += rd64(hw, NGBE_MACRX65TO127L);
+ hw_stats->rx_size_128_to_255_packets += rd64(hw, NGBE_MACRX128TO255L);
+ hw_stats->rx_size_256_to_511_packets += rd64(hw, NGBE_MACRX256TO511L);
+ hw_stats->rx_size_512_to_1023_packets +=
+ rd64(hw, NGBE_MACRX512TO1023L);
+ hw_stats->rx_size_1024_to_max_packets +=
+ rd64(hw, NGBE_MACRX1024TOMAXL);
+ hw_stats->tx_size_64_packets += rd64(hw, NGBE_MACTX1TO64L);
+ hw_stats->tx_size_65_to_127_packets += rd64(hw, NGBE_MACTX65TO127L);
+ hw_stats->tx_size_128_to_255_packets += rd64(hw, NGBE_MACTX128TO255L);
+ hw_stats->tx_size_256_to_511_packets += rd64(hw, NGBE_MACTX256TO511L);
+ hw_stats->tx_size_512_to_1023_packets +=
+ rd64(hw, NGBE_MACTX512TO1023L);
+ hw_stats->tx_size_1024_to_max_packets +=
+ rd64(hw, NGBE_MACTX1024TOMAXL);
+
+ hw_stats->rx_undersize_errors += rd64(hw, NGBE_MACRXERRLENL);
+ hw_stats->rx_oversize_errors += rd32(hw, NGBE_MACRXOVERSIZE);
+ hw_stats->rx_jabber_errors += rd32(hw, NGBE_MACRXJABBER);
+
+ /* MNG Stats */
+ hw_stats->mng_bmc2host_packets = rd32(hw, NGBE_MNGBMC2OS);
+ hw_stats->mng_host2bmc_packets = rd32(hw, NGBE_MNGOS2BMC);
+ hw_stats->rx_management_packets = rd32(hw, NGBE_DMARXMNG);
+ hw_stats->tx_management_packets = rd32(hw, NGBE_DMATXMNG);
+
+ /* MACsec Stats */
+ hw_stats->tx_macsec_pkts_untagged += rd32(hw, NGBE_LSECTX_UTPKT);
+ hw_stats->tx_macsec_pkts_encrypted +=
+ rd32(hw, NGBE_LSECTX_ENCPKT);
+ hw_stats->tx_macsec_pkts_protected +=
+ rd32(hw, NGBE_LSECTX_PROTPKT);
+ hw_stats->tx_macsec_octets_encrypted +=
+ rd32(hw, NGBE_LSECTX_ENCOCT);
+ hw_stats->tx_macsec_octets_protected +=
+ rd32(hw, NGBE_LSECTX_PROTOCT);
+ hw_stats->rx_macsec_pkts_untagged += rd32(hw, NGBE_LSECRX_UTPKT);
+ hw_stats->rx_macsec_pkts_badtag += rd32(hw, NGBE_LSECRX_BTPKT);
+ hw_stats->rx_macsec_pkts_nosci += rd32(hw, NGBE_LSECRX_NOSCIPKT);
+ hw_stats->rx_macsec_pkts_unknownsci += rd32(hw, NGBE_LSECRX_UNSCIPKT);
+ hw_stats->rx_macsec_octets_decrypted += rd32(hw, NGBE_LSECRX_DECOCT);
+ hw_stats->rx_macsec_octets_validated += rd32(hw, NGBE_LSECRX_VLDOCT);
+ hw_stats->rx_macsec_sc_pkts_unchecked +=
+ rd32(hw, NGBE_LSECRX_UNCHKPKT);
+ hw_stats->rx_macsec_sc_pkts_delayed += rd32(hw, NGBE_LSECRX_DLYPKT);
+ hw_stats->rx_macsec_sc_pkts_late += rd32(hw, NGBE_LSECRX_LATEPKT);
+ for (i = 0; i < 2; i++) {
+ hw_stats->rx_macsec_sa_pkts_ok +=
+ rd32(hw, NGBE_LSECRX_OKPKT(i));
+ hw_stats->rx_macsec_sa_pkts_invalid +=
+ rd32(hw, NGBE_LSECRX_INVPKT(i));
+ hw_stats->rx_macsec_sa_pkts_notvalid +=
+ rd32(hw, NGBE_LSECRX_BADPKT(i));
+ }
+ for (i = 0; i < 4; i++) {
+ hw_stats->rx_macsec_sa_pkts_unusedsa +=
+ rd32(hw, NGBE_LSECRX_INVSAPKT(i));
+ hw_stats->rx_macsec_sa_pkts_notusingsa +=
+ rd32(hw, NGBE_LSECRX_BADSAPKT(i));
+ }
+ hw_stats->rx_total_missed_packets =
+ hw_stats->rx_up_dropped;
+}
+
+static int
+ngbe_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev);
+ struct ngbe_stat_mappings *stat_mappings =
+ NGBE_DEV_STAT_MAPPINGS(dev);
+ uint32_t i, j;
+
+ ngbe_read_stats_registers(hw, hw_stats);
+
+ if (stats == NULL)
+ return -EINVAL;
+
+ /* Fill out the rte_eth_stats statistics structure */
+ stats->ipackets = hw_stats->rx_packets;
+ stats->ibytes = hw_stats->rx_bytes;
+ stats->opackets = hw_stats->tx_packets;
+ stats->obytes = hw_stats->tx_bytes;
+
+ memset(&stats->q_ipackets, 0, sizeof(stats->q_ipackets));
+ memset(&stats->q_opackets, 0, sizeof(stats->q_opackets));
+ memset(&stats->q_ibytes, 0, sizeof(stats->q_ibytes));
+ memset(&stats->q_obytes, 0, sizeof(stats->q_obytes));
+ memset(&stats->q_errors, 0, sizeof(stats->q_errors));
+ for (i = 0; i < NGBE_MAX_QP; i++) {
+ uint32_t n = i / NB_QMAP_FIELDS_PER_QSM_REG;
+ uint32_t offset = (i % NB_QMAP_FIELDS_PER_QSM_REG) * 8;
+ uint32_t q_map;
+
+ q_map = (stat_mappings->rqsm[n] >> offset)
+ & QMAP_FIELD_RESERVED_BITS_MASK;
+ j = (q_map < RTE_ETHDEV_QUEUE_STAT_CNTRS
+ ? q_map : q_map % RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ stats->q_ipackets[j] += hw_stats->qp[i].rx_qp_packets;
+ stats->q_ibytes[j] += hw_stats->qp[i].rx_qp_bytes;
+
+ q_map = (stat_mappings->tqsm[n] >> offset)
+ & QMAP_FIELD_RESERVED_BITS_MASK;
+ j = (q_map < RTE_ETHDEV_QUEUE_STAT_CNTRS
+ ? q_map : q_map % RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ stats->q_opackets[j] += hw_stats->qp[i].tx_qp_packets;
+ stats->q_obytes[j] += hw_stats->qp[i].tx_qp_bytes;
+ }
+
+ /* Rx Errors */
+ stats->imissed = hw_stats->rx_total_missed_packets +
+ hw_stats->rx_dma_drop;
+ stats->ierrors = hw_stats->rx_crc_errors +
+ hw_stats->rx_mac_short_packet_dropped +
+ hw_stats->rx_length_errors +
+ hw_stats->rx_undersize_errors +
+ hw_stats->rx_oversize_errors +
+ hw_stats->rx_illegal_byte_errors +
+ hw_stats->rx_error_bytes +
+ hw_stats->rx_fragment_errors;
+
+ /* Tx Errors */
+ stats->oerrors = 0;
+ return 0;
+}
+
+static int
+ngbe_dev_stats_reset(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev);
+
+ /* HW registers are cleared on read */
+ hw->offset_loaded = 0;
+ ngbe_dev_stats_get(dev, NULL);
+ hw->offset_loaded = 1;
+
+ /* Reset software totals */
+ memset(hw_stats, 0, sizeof(*hw_stats));
+
+ return 0;
+}
+
static int
ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
@@ -1462,6 +1759,9 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.dev_close = ngbe_dev_close,
.dev_reset = ngbe_dev_reset,
.link_update = ngbe_dev_link_update,
+ .stats_get = ngbe_dev_stats_get,
+ .stats_reset = ngbe_dev_stats_reset,
+ .queue_stats_mapping_set = ngbe_dev_queue_stats_mapping_set,
.vlan_offload_set = ngbe_vlan_offload_set,
.rx_queue_start = ngbe_dev_rx_queue_start,
.rx_queue_stop = ngbe_dev_rx_queue_stop,
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 8b3a1cdc3d..c0f1a50c66 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -40,6 +40,15 @@ struct ngbe_interrupt {
uint64_t mask_orig; /* save mask during delayed handler */
};
+#define NGBE_NB_STAT_MAPPING 32
+#define QSM_REG_NB_BITS_PER_QMAP_FIELD 8
+#define NB_QMAP_FIELDS_PER_QSM_REG 4
+#define QMAP_FIELD_RESERVED_BITS_MASK 0x0f
+struct ngbe_stat_mappings {
+ uint32_t tqsm[NGBE_NB_STAT_MAPPING];
+ uint32_t rqsm[NGBE_NB_STAT_MAPPING];
+};
+
struct ngbe_vfta {
uint32_t vfta[NGBE_VFTA_SIZE];
};
@@ -53,7 +62,9 @@ struct ngbe_hwstrip {
*/
struct ngbe_adapter {
struct ngbe_hw hw;
+ struct ngbe_hw_stats stats;
struct ngbe_interrupt intr;
+ struct ngbe_stat_mappings stat_mappings;
struct ngbe_vfta shadow_vfta;
struct ngbe_hwstrip hwstrip;
bool rx_bulk_alloc_allowed;
@@ -76,6 +87,9 @@ ngbe_dev_hw(struct rte_eth_dev *dev)
return hw;
}
+#define NGBE_DEV_STATS(dev) \
+ (&((struct ngbe_adapter *)(dev)->data->dev_private)->stats)
+
static inline struct ngbe_interrupt *
ngbe_dev_intr(struct rte_eth_dev *dev)
{
@@ -85,6 +99,9 @@ ngbe_dev_intr(struct rte_eth_dev *dev)
return intr;
}
+#define NGBE_DEV_STAT_MAPPINGS(dev) \
+ (&((struct ngbe_adapter *)(dev)->data->dev_private)->stat_mappings)
+
#define NGBE_DEV_VFTA(dev) \
(&((struct ngbe_adapter *)(dev)->data->dev_private)->shadow_vfta)
@@ -190,5 +207,7 @@ void ngbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev,
uint16_t queue, bool on);
void ngbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev,
int mask);
+void ngbe_read_stats_registers(struct ngbe_hw *hw,
+ struct ngbe_hw_stats *hw_stats);
#endif /* _NGBE_ETHDEV_H_ */
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 09/32] net/ngbe: support device xstats
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (7 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 08/32] net/ngbe: support basic statistics Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 10/32] net/ngbe: support MTU set Jiawen Wu
` (22 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add support for getting device extended statistics by reading
hardware registers.
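A rough sketch of how an application would dump these counters through
the generic xstats API (the helper name and output format are
illustrative assumptions):

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Minimal sketch: query the number of xstats, then fetch and print
 * every name/value pair.
 */
static int dump_xstats(uint16_t port_id)
{
	struct rte_eth_xstat *xstats = NULL;
	struct rte_eth_xstat_name *names = NULL;
	int n, i, ret = -1;

	/* a NULL array asks only for the required count */
	n = rte_eth_xstats_get(port_id, NULL, 0);
	if (n <= 0)
		return n;

	xstats = calloc(n, sizeof(*xstats));
	names = calloc(n, sizeof(*names));
	if (xstats == NULL || names == NULL)
		goto out;

	if (rte_eth_xstats_get_names(port_id, names, n) != n ||
	    rte_eth_xstats_get(port_id, xstats, n) != n)
		goto out;

	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n",
		       names[xstats[i].id].name, xstats[i].value);
	ret = 0;
out:
	free(xstats);
	free(names);
	return ret;
}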
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 1 +
drivers/net/ngbe/ngbe_ethdev.c | 316 ++++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_ethdev.h | 6 +
3 files changed, 323 insertions(+)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index f310fb102a..42101020dd 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -20,6 +20,7 @@ Inner L3 checksum = P
Inner L4 checksum = P
Packet type parsing = Y
Basic stats = Y
+Extended stats = Y
Stats per queue = Y
Multiprocess aware = Y
Linux = Y
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3d459718b1..45d7c48011 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -84,6 +84,104 @@ static const struct rte_eth_desc_lim tx_desc_lim = {
static const struct eth_dev_ops ngbe_eth_dev_ops;
+#define HW_XSTAT(m) {#m, offsetof(struct ngbe_hw_stats, m)}
+#define HW_XSTAT_NAME(m, n) {n, offsetof(struct ngbe_hw_stats, m)}
+static const struct rte_ngbe_xstats_name_off rte_ngbe_stats_strings[] = {
+ /* MNG RxTx */
+ HW_XSTAT(mng_bmc2host_packets),
+ HW_XSTAT(mng_host2bmc_packets),
+ /* Basic RxTx */
+ HW_XSTAT(rx_packets),
+ HW_XSTAT(tx_packets),
+ HW_XSTAT(rx_bytes),
+ HW_XSTAT(tx_bytes),
+ HW_XSTAT(rx_total_bytes),
+ HW_XSTAT(rx_total_packets),
+ HW_XSTAT(tx_total_packets),
+ HW_XSTAT(rx_total_missed_packets),
+ HW_XSTAT(rx_broadcast_packets),
+ HW_XSTAT(rx_multicast_packets),
+ HW_XSTAT(rx_management_packets),
+ HW_XSTAT(tx_management_packets),
+ HW_XSTAT(rx_management_dropped),
+
+ /* Basic Error */
+ HW_XSTAT(rx_crc_errors),
+ HW_XSTAT(rx_illegal_byte_errors),
+ HW_XSTAT(rx_error_bytes),
+ HW_XSTAT(rx_mac_short_packet_dropped),
+ HW_XSTAT(rx_length_errors),
+ HW_XSTAT(rx_undersize_errors),
+ HW_XSTAT(rx_fragment_errors),
+ HW_XSTAT(rx_oversize_errors),
+ HW_XSTAT(rx_jabber_errors),
+ HW_XSTAT(rx_l3_l4_xsum_error),
+ HW_XSTAT(mac_local_errors),
+ HW_XSTAT(mac_remote_errors),
+
+ /* MACSEC */
+ HW_XSTAT(tx_macsec_pkts_untagged),
+ HW_XSTAT(tx_macsec_pkts_encrypted),
+ HW_XSTAT(tx_macsec_pkts_protected),
+ HW_XSTAT(tx_macsec_octets_encrypted),
+ HW_XSTAT(tx_macsec_octets_protected),
+ HW_XSTAT(rx_macsec_pkts_untagged),
+ HW_XSTAT(rx_macsec_pkts_badtag),
+ HW_XSTAT(rx_macsec_pkts_nosci),
+ HW_XSTAT(rx_macsec_pkts_unknownsci),
+ HW_XSTAT(rx_macsec_octets_decrypted),
+ HW_XSTAT(rx_macsec_octets_validated),
+ HW_XSTAT(rx_macsec_sc_pkts_unchecked),
+ HW_XSTAT(rx_macsec_sc_pkts_delayed),
+ HW_XSTAT(rx_macsec_sc_pkts_late),
+ HW_XSTAT(rx_macsec_sa_pkts_ok),
+ HW_XSTAT(rx_macsec_sa_pkts_invalid),
+ HW_XSTAT(rx_macsec_sa_pkts_notvalid),
+ HW_XSTAT(rx_macsec_sa_pkts_unusedsa),
+ HW_XSTAT(rx_macsec_sa_pkts_notusingsa),
+
+ /* MAC RxTx */
+ HW_XSTAT(rx_size_64_packets),
+ HW_XSTAT(rx_size_65_to_127_packets),
+ HW_XSTAT(rx_size_128_to_255_packets),
+ HW_XSTAT(rx_size_256_to_511_packets),
+ HW_XSTAT(rx_size_512_to_1023_packets),
+ HW_XSTAT(rx_size_1024_to_max_packets),
+ HW_XSTAT(tx_size_64_packets),
+ HW_XSTAT(tx_size_65_to_127_packets),
+ HW_XSTAT(tx_size_128_to_255_packets),
+ HW_XSTAT(tx_size_256_to_511_packets),
+ HW_XSTAT(tx_size_512_to_1023_packets),
+ HW_XSTAT(tx_size_1024_to_max_packets),
+
+ /* Flow Control */
+ HW_XSTAT(tx_xon_packets),
+ HW_XSTAT(rx_xon_packets),
+ HW_XSTAT(tx_xoff_packets),
+ HW_XSTAT(rx_xoff_packets),
+
+ HW_XSTAT_NAME(tx_xon_packets, "tx_flow_control_xon_packets"),
+ HW_XSTAT_NAME(rx_xon_packets, "rx_flow_control_xon_packets"),
+ HW_XSTAT_NAME(tx_xoff_packets, "tx_flow_control_xoff_packets"),
+ HW_XSTAT_NAME(rx_xoff_packets, "rx_flow_control_xoff_packets"),
+};
+
+#define NGBE_NB_HW_STATS (sizeof(rte_ngbe_stats_strings) / \
+ sizeof(rte_ngbe_stats_strings[0]))
+
+/* Per-queue statistics */
+#define QP_XSTAT(m) {#m, offsetof(struct ngbe_hw_stats, qp[0].m)}
+static const struct rte_ngbe_xstats_name_off rte_ngbe_qp_strings[] = {
+ QP_XSTAT(rx_qp_packets),
+ QP_XSTAT(tx_qp_packets),
+ QP_XSTAT(rx_qp_bytes),
+ QP_XSTAT(tx_qp_bytes),
+ QP_XSTAT(rx_qp_mc_packets),
+};
+
+#define NGBE_NB_QP_STATS (sizeof(rte_ngbe_qp_strings) / \
+ sizeof(rte_ngbe_qp_strings[0]))
+
static inline int32_t
ngbe_pf_reset_hw(struct ngbe_hw *hw)
{
@@ -1213,6 +1311,219 @@ ngbe_dev_stats_reset(struct rte_eth_dev *dev)
return 0;
}
+/* This function calculates the number of xstats based on the current config */
+static unsigned
+ngbe_xstats_calc_num(struct rte_eth_dev *dev)
+{
+ int nb_queues = max(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
+ return NGBE_NB_HW_STATS +
+ NGBE_NB_QP_STATS * nb_queues;
+}
+
+static inline int
+ngbe_get_name_by_id(uint32_t id, char *name, uint32_t size)
+{
+ int nb, st;
+
+ /* Extended stats from ngbe_hw_stats */
+ if (id < NGBE_NB_HW_STATS) {
+ snprintf(name, size, "[hw]%s",
+ rte_ngbe_stats_strings[id].name);
+ return 0;
+ }
+ id -= NGBE_NB_HW_STATS;
+
+ /* Queue Stats */
+ if (id < NGBE_NB_QP_STATS * NGBE_MAX_QP) {
+ nb = id / NGBE_NB_QP_STATS;
+ st = id % NGBE_NB_QP_STATS;
+ snprintf(name, size, "[q%u]%s", nb,
+ rte_ngbe_qp_strings[st].name);
+ return 0;
+ }
+ id -= NGBE_NB_QP_STATS * NGBE_MAX_QP;
+
+ return -(int)(id + 1);
+}
+
+static inline int
+ngbe_get_offset_by_id(uint32_t id, uint32_t *offset)
+{
+ int nb, st;
+
+ /* Extended stats from ngbe_hw_stats */
+ if (id < NGBE_NB_HW_STATS) {
+ *offset = rte_ngbe_stats_strings[id].offset;
+ return 0;
+ }
+ id -= NGBE_NB_HW_STATS;
+
+ /* Queue Stats */
+ if (id < NGBE_NB_QP_STATS * NGBE_MAX_QP) {
+ nb = id / NGBE_NB_QP_STATS;
+ st = id % NGBE_NB_QP_STATS;
+ *offset = rte_ngbe_qp_strings[st].offset +
+ nb * (offsetof(struct ngbe_hw_stats, qp[1]) -
+ offsetof(struct ngbe_hw_stats, qp[0]));
+ return 0;
+ }
+
+ return -1;
+}
+
+static int ngbe_dev_xstats_get_names(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names, unsigned int limit)
+{
+ unsigned int i, count;
+
+ count = ngbe_xstats_calc_num(dev);
+ if (xstats_names == NULL)
+ return count;
+
+ /* Note: limit >= cnt_stats checked upstream
+ * in rte_eth_xstats_names()
+ */
+ limit = min(limit, count);
+
+ /* Extended stats from ngbe_hw_stats */
+ for (i = 0; i < limit; i++) {
+ if (ngbe_get_name_by_id(i, xstats_names[i].name,
+ sizeof(xstats_names[i].name))) {
+ PMD_INIT_LOG(WARNING, "id value %d isn't valid", i);
+ break;
+ }
+ }
+
+ return i;
+}
+
+static int ngbe_dev_xstats_get_names_by_id(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ const uint64_t *ids,
+ unsigned int limit)
+{
+ unsigned int i;
+
+ if (ids == NULL)
+ return ngbe_dev_xstats_get_names(dev, xstats_names, limit);
+
+ for (i = 0; i < limit; i++) {
+ if (ngbe_get_name_by_id(ids[i], xstats_names[i].name,
+ sizeof(xstats_names[i].name))) {
+ PMD_INIT_LOG(WARNING, "id value %d isn't valid", i);
+ return -1;
+ }
+ }
+
+ return i;
+}
+
+static int
+ngbe_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+ unsigned int limit)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev);
+ unsigned int i, count;
+
+ ngbe_read_stats_registers(hw, hw_stats);
+
+ /* If this is a reset, xstats is NULL and we have cleared the
+ * registers by reading them.
+ */
+ count = ngbe_xstats_calc_num(dev);
+ if (xstats == NULL)
+ return count;
+
+ limit = min(limit, ngbe_xstats_calc_num(dev));
+
+ /* Extended stats from ngbe_hw_stats */
+ for (i = 0; i < limit; i++) {
+ uint32_t offset = 0;
+
+ if (ngbe_get_offset_by_id(i, &offset)) {
+ PMD_INIT_LOG(WARNING, "id value %d isn't valid", i);
+ break;
+ }
+ xstats[i].value = *(uint64_t *)(((char *)hw_stats) + offset);
+ xstats[i].id = i;
+ }
+
+ return i;
+}
+
+static int
+ngbe_dev_xstats_get_(struct rte_eth_dev *dev, uint64_t *values,
+ unsigned int limit)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev);
+ unsigned int i, count;
+
+ ngbe_read_stats_registers(hw, hw_stats);
+
+ /* If this is a reset, values is NULL and we have cleared the
+ * registers by reading them.
+ */
+ count = ngbe_xstats_calc_num(dev);
+ if (values == NULL)
+ return count;
+
+ limit = min(limit, ngbe_xstats_calc_num(dev));
+
+ /* Extended stats from ngbe_hw_stats */
+ for (i = 0; i < limit; i++) {
+ uint32_t offset;
+
+ if (ngbe_get_offset_by_id(i, &offset)) {
+ PMD_INIT_LOG(WARNING, "id value %d isn't valid", i);
+ break;
+ }
+ values[i] = *(uint64_t *)(((char *)hw_stats) + offset);
+ }
+
+ return i;
+}
+
+static int
+ngbe_dev_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
+ uint64_t *values, unsigned int limit)
+{
+ struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev);
+ unsigned int i;
+
+ if (ids == NULL)
+ return ngbe_dev_xstats_get_(dev, values, limit);
+
+ for (i = 0; i < limit; i++) {
+ uint32_t offset;
+
+ if (ngbe_get_offset_by_id(ids[i], &offset)) {
+ PMD_INIT_LOG(WARNING, "id value %d isn't valid", i);
+ break;
+ }
+ values[i] = *(uint64_t *)(((char *)hw_stats) + offset);
+ }
+
+ return i;
+}
+
+static int
+ngbe_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev);
+
+ /* HW registers are cleared on read */
+ hw->offset_loaded = 0;
+ ngbe_read_stats_registers(hw, hw_stats);
+ hw->offset_loaded = 1;
+
+ /* Reset software totals */
+ memset(hw_stats, 0, sizeof(*hw_stats));
+
+ return 0;
+}
+
static int
ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
@@ -1760,7 +2071,12 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.dev_reset = ngbe_dev_reset,
.link_update = ngbe_dev_link_update,
.stats_get = ngbe_dev_stats_get,
+ .xstats_get = ngbe_dev_xstats_get,
+ .xstats_get_by_id = ngbe_dev_xstats_get_by_id,
.stats_reset = ngbe_dev_stats_reset,
+ .xstats_reset = ngbe_dev_xstats_reset,
+ .xstats_get_names = ngbe_dev_xstats_get_names,
+ .xstats_get_names_by_id = ngbe_dev_xstats_get_names_by_id,
.queue_stats_mapping_set = ngbe_dev_queue_stats_mapping_set,
.vlan_offload_set = ngbe_vlan_offload_set,
.rx_queue_start = ngbe_dev_rx_queue_start,
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index c0f1a50c66..1527dcc022 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -202,6 +202,12 @@ void ngbe_vlan_hw_strip_config(struct rte_eth_dev *dev);
#define NGBE_DEFAULT_TX_HTHRESH 0
#define NGBE_DEFAULT_TX_WTHRESH 0
+/* store statistics names and its offset in stats structure */
+struct rte_ngbe_xstats_name_off {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ unsigned int offset;
+};
+
const uint32_t *ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev);
void ngbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev,
uint16_t queue, bool on);
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 10/32] net/ngbe: support MTU set
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (8 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 09/32] net/ngbe: support device xstats Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-15 16:52 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 11/32] net/ngbe: add device promiscuous and allmulticast mode Jiawen Wu
` (21 subsequent siblings)
31 siblings, 1 reply; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support updating port MTU.
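A minimal application-side sketch, assuming the port is stopped (or
already uses scattered Rx) and taking 9000 as an illustrative jumbo
MTU within NGBE_FRAME_SIZE_MAX:

#include <stdio.h>
#include <errno.h>
#include <rte_ethdev.h>

static int set_jumbo_mtu(uint16_t port_id)
{
	/* reaches ngbe_dev_mtu_set(), which reprograms NGBE_FRMSZ */
	int ret = rte_eth_dev_set_mtu(port_id, 9000);

	if (ret == -EINVAL)
		printf("MTU out of range, or stop the port first\n");
	return ret;
}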
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 1 +
drivers/net/ngbe/base/ngbe_type.h | 3 +++
drivers/net/ngbe/ngbe_ethdev.c | 41 +++++++++++++++++++++++++++++++
3 files changed, 45 insertions(+)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 42101020dd..bdb06916e1 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -8,6 +8,7 @@ Speed capabilities = Y
Link status = Y
Link status event = Y
Queue start/stop = Y
+MTU update = Y
Jumbo frame = Y
Scattered Rx = Y
TSO = Y
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index c13f0208fd..78fb0da7fa 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -8,6 +8,7 @@
#define NGBE_LINK_UP_TIME 90 /* 9.0 Seconds */
+#define NGBE_FRAME_SIZE_MAX (9728) /* Maximum frame size, +FCS */
#define NGBE_FRAME_SIZE_DFT (1522) /* Default frame size, +FCS */
#define NGBE_MAX_QP (8)
@@ -316,6 +317,8 @@ struct ngbe_hw {
u16 nb_rx_queues;
u16 nb_tx_queues;
+ u32 mode;
+
u32 q_rx_regs[8 * 4];
u32 q_tx_regs[8 * 4];
bool offset_loaded;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 45d7c48011..29f35d9e8d 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -1970,6 +1970,46 @@ ngbe_dev_interrupt_handler(void *param)
ngbe_dev_interrupt_action(dev);
}
+static int
+ngbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct rte_eth_dev_info dev_info;
+ uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 4;
+ struct rte_eth_dev_data *dev_data = dev->data;
+ int ret;
+
+ ret = ngbe_dev_info_get(dev, &dev_info);
+ if (ret != 0)
+ return ret;
+
+ /* check that mtu is within the allowed range */
+ if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
+ return -EINVAL;
+
+ /* If device is started, refuse mtu that requires the support of
+ * scattered packets when this feature has not been enabled before.
+ */
+ if (dev_data->dev_started && !dev_data->scattered_rx &&
+ (frame_size + 2 * NGBE_VLAN_TAG_SIZE >
+ dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+ PMD_INIT_LOG(ERR, "Stop port first.");
+ return -EINVAL;
+ }
+
+ /* update max frame size */
+ dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+ if (hw->mode)
+ wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK,
+ NGBE_FRAME_SIZE_MAX);
+ else
+ wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK,
+ NGBE_FRMSZ_MAX(frame_size));
+
+ return 0;
+}
+
/**
* Set the IVAR registers, mapping interrupt causes to vectors
* @param hw
@@ -2078,6 +2118,7 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.xstats_get_names = ngbe_dev_xstats_get_names,
.xstats_get_names_by_id = ngbe_dev_xstats_get_names_by_id,
.queue_stats_mapping_set = ngbe_dev_queue_stats_mapping_set,
+ .mtu_set = ngbe_dev_mtu_set,
.vlan_offload_set = ngbe_vlan_offload_set,
.rx_queue_start = ngbe_dev_rx_queue_start,
.rx_queue_stop = ngbe_dev_rx_queue_stop,
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 11/32] net/ngbe: add device promiscuous and allmulticast mode
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (9 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 10/32] net/ngbe: support MTU set Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 12/32] net/ngbe: support getting FW version Jiawen Wu
` (20 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support enabling and disabling promiscuous and allmulticast modes for
a port.
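A small usage sketch (the helper and its flags are illustrative, not
part of this patch):

#include <stdbool.h>
#include <rte_ethdev.h>

static void set_rx_filters(uint16_t port_id, bool promisc, bool allmulti)
{
	/* the enable path sets NGBE_PSRCTL_UCP | NGBE_PSRCTL_MCP */
	if (promisc)
		rte_eth_promiscuous_enable(port_id);
	else
		rte_eth_promiscuous_disable(port_id);

	/* the enable path sets NGBE_PSRCTL_MCP only */
	if (allmulti)
		rte_eth_allmulticast_enable(port_id);
	else
		rte_eth_allmulticast_disable(port_id);
}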
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 2 +
doc/guides/nics/ngbe.rst | 2 +
drivers/net/ngbe/ngbe_ethdev.c | 63 +++++++++++++++++++++++++++++++
3 files changed, 67 insertions(+)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index bdb06916e1..2f38f1e843 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -12,6 +12,8 @@ MTU update = Y
Jumbo frame = Y
Scattered Rx = Y
TSO = Y
+Promiscuous mode = Y
+Allmulticast mode = Y
CRC offload = P
VLAN offload = P
QinQ offload = P
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 64c07e4741..8333fba9cd 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -15,6 +15,8 @@ Features
- Checksum offload
- VLAN/QinQ stripping and inserting
- TSO offload
+- Promiscuous mode
+- Multicast mode
- Port hardware statistics
- Jumbo frames
- Link state information
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 29f35d9e8d..ce71edd6d8 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -1674,6 +1674,65 @@ ngbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
return ngbe_dev_link_update_share(dev, wait_to_complete);
}
+static int
+ngbe_dev_promiscuous_enable(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t fctrl;
+
+ fctrl = rd32(hw, NGBE_PSRCTL);
+ fctrl |= (NGBE_PSRCTL_UCP | NGBE_PSRCTL_MCP);
+ wr32(hw, NGBE_PSRCTL, fctrl);
+
+ return 0;
+}
+
+static int
+ngbe_dev_promiscuous_disable(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t fctrl;
+
+ fctrl = rd32(hw, NGBE_PSRCTL);
+ fctrl &= (~NGBE_PSRCTL_UCP);
+ if (dev->data->all_multicast == 1)
+ fctrl |= NGBE_PSRCTL_MCP;
+ else
+ fctrl &= (~NGBE_PSRCTL_MCP);
+ wr32(hw, NGBE_PSRCTL, fctrl);
+
+ return 0;
+}
+
+static int
+ngbe_dev_allmulticast_enable(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t fctrl;
+
+ fctrl = rd32(hw, NGBE_PSRCTL);
+ fctrl |= NGBE_PSRCTL_MCP;
+ wr32(hw, NGBE_PSRCTL, fctrl);
+
+ return 0;
+}
+
+static int
+ngbe_dev_allmulticast_disable(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t fctrl;
+
+ if (dev->data->promiscuous == 1)
+ return 0; /* must remain in all_multicast mode */
+
+ fctrl = rd32(hw, NGBE_PSRCTL);
+ fctrl &= (~NGBE_PSRCTL_MCP);
+ wr32(hw, NGBE_PSRCTL, fctrl);
+
+ return 0;
+}
+
/**
* It clears the interrupt causes and enables the interrupt.
* It will be called once only during NIC initialized.
@@ -2109,6 +2168,10 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.dev_stop = ngbe_dev_stop,
.dev_close = ngbe_dev_close,
.dev_reset = ngbe_dev_reset,
+ .promiscuous_enable = ngbe_dev_promiscuous_enable,
+ .promiscuous_disable = ngbe_dev_promiscuous_disable,
+ .allmulticast_enable = ngbe_dev_allmulticast_enable,
+ .allmulticast_disable = ngbe_dev_allmulticast_disable,
.link_update = ngbe_dev_link_update,
.stats_get = ngbe_dev_stats_get,
.xstats_get = ngbe_dev_xstats_get,
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 12/32] net/ngbe: support getting FW version
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (10 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 11/32] net/ngbe: add device promiscuous and allmulticast mode Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-15 16:53 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 13/32] net/ngbe: add loopback mode Jiawen Wu
` (19 subsequent siblings)
31 siblings, 1 reply; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add an operation to get the firmware version.
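A usage sketch, assuming a 32-byte buffer is enough for the "0x%08x"
string this driver formats from the EEPROM id:

#include <stdio.h>
#include <rte_ethdev.h>

static void print_fw_version(uint16_t port_id)
{
	char fw[32];

	/* reaches ngbe_fw_version_get(); returns > 0 if fw is too small */
	if (rte_eth_dev_fw_version_get(port_id, fw, sizeof(fw)) == 0)
		printf("port %u firmware: %s\n", port_id, fw);
}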
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 1 +
doc/guides/nics/ngbe.rst | 1 +
drivers/net/ngbe/base/ngbe_dummy.h | 6 ++++
drivers/net/ngbe/base/ngbe_eeprom.c | 56 +++++++++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_eeprom.h | 5 +++
drivers/net/ngbe/base/ngbe_hw.c | 3 ++
drivers/net/ngbe/base/ngbe_mng.c | 44 +++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_mng.h | 5 +++
drivers/net/ngbe/base/ngbe_type.h | 2 ++
drivers/net/ngbe/ngbe_ethdev.c | 21 +++++++++++
10 files changed, 144 insertions(+)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 2f38f1e843..1006c3935b 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -25,6 +25,7 @@ Packet type parsing = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
+FW version = Y
Multiprocess aware = Y
Linux = Y
ARMv8 = Y
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 8333fba9cd..50a6e85c49 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -21,6 +21,7 @@ Features
- Jumbo frames
- Link state information
- Scattered and gather for TX and RX
+- FW version
Prerequisites
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 0def116c53..689480cc9a 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -33,6 +33,11 @@ static inline s32 ngbe_rom_init_params_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_rom_read32_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+ u32 *TUP2)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_rom_validate_checksum_dummy(struct ngbe_hw *TUP0,
u16 *TUP1)
{
@@ -177,6 +182,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
{
hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy;
hw->rom.init_params = ngbe_rom_init_params_dummy;
+ hw->rom.read32 = ngbe_rom_read32_dummy;
hw->rom.validate_checksum = ngbe_rom_validate_checksum_dummy;
hw->mac.init_hw = ngbe_mac_init_hw_dummy;
hw->mac.reset_hw = ngbe_mac_reset_hw_dummy;
diff --git a/drivers/net/ngbe/base/ngbe_eeprom.c b/drivers/net/ngbe/base/ngbe_eeprom.c
index 3dcd5c2f6c..9ae2f0badb 100644
--- a/drivers/net/ngbe/base/ngbe_eeprom.c
+++ b/drivers/net/ngbe/base/ngbe_eeprom.c
@@ -161,6 +161,30 @@ void ngbe_release_eeprom_semaphore(struct ngbe_hw *hw)
ngbe_flush(hw);
}
+/**
+ * ngbe_ee_read32 - Read a 32-bit EEPROM word using a host interface cmd
+ * @hw: pointer to hardware structure
+ * @addr: address of the word in the EEPROM to read
+ * @data: word read from the EEPROM
+ *
+ * Reads a 32-bit word from the EEPROM using the hostif.
+ **/
+s32 ngbe_ee_read32(struct ngbe_hw *hw, u32 addr, u32 *data)
+{
+ const u32 mask = NGBE_MNGSEM_SWMBX | NGBE_MNGSEM_SWFLASH;
+ int err;
+
+ err = hw->mac.acquire_swfw_sync(hw, mask);
+ if (err)
+ return err;
+
+ err = ngbe_hic_sr_read(hw, addr, (u8 *)data, 4);
+
+ hw->mac.release_swfw_sync(hw, mask);
+
+ return err;
+}
+
/**
* ngbe_validate_eeprom_checksum_em - Validate EEPROM checksum
* @hw: pointer to hardware structure
@@ -201,3 +225,35 @@ s32 ngbe_validate_eeprom_checksum_em(struct ngbe_hw *hw,
return err;
}
+/**
+ * ngbe_save_eeprom_version
+ * @hw: pointer to hardware structure
+ *
+ * Save off the EEPROM version number and Option ROM version, which
+ * together make a unique identifier for the EEPROM.
+ */
+s32 ngbe_save_eeprom_version(struct ngbe_hw *hw)
+{
+ u32 eeprom_verl = 0;
+ u32 etrack_id = 0;
+ u32 offset = (hw->rom.sw_addr + NGBE_EEPROM_VERSION_L) << 1;
+
+ DEBUGFUNC("ngbe_save_eeprom_version");
+
+ if (hw->bus.lan_id == 0) {
+ hw->rom.read32(hw, offset, &eeprom_verl);
+ etrack_id = eeprom_verl;
+ wr32(hw, NGBE_EEPROM_VERSION_STORE_REG, etrack_id);
+ wr32(hw, NGBE_CALSUM_CAP_STATUS,
+ hw->rom.cksum_devcap | 0x10000);
+ } else if (hw->rom.cksum_devcap) {
+ etrack_id = hw->rom.saved_version;
+ } else {
+ hw->rom.read32(hw, offset, &eeprom_verl);
+ etrack_id = eeprom_verl;
+ }
+
+ hw->eeprom_id = etrack_id;
+
+ return 0;
+}
diff --git a/drivers/net/ngbe/base/ngbe_eeprom.h b/drivers/net/ngbe/base/ngbe_eeprom.h
index b433077629..5f27425913 100644
--- a/drivers/net/ngbe/base/ngbe_eeprom.h
+++ b/drivers/net/ngbe/base/ngbe_eeprom.h
@@ -6,6 +6,8 @@
#ifndef _NGBE_EEPROM_H_
#define _NGBE_EEPROM_H_
+#define NGBE_EEPROM_VERSION_L 0x1D
+#define NGBE_EEPROM_VERSION_H 0x1E
#define NGBE_CALSUM_CAP_STATUS 0x10224
#define NGBE_EEPROM_VERSION_STORE_REG 0x1022C
@@ -13,5 +15,8 @@ s32 ngbe_init_eeprom_params(struct ngbe_hw *hw);
s32 ngbe_validate_eeprom_checksum_em(struct ngbe_hw *hw, u16 *checksum_val);
s32 ngbe_get_eeprom_semaphore(struct ngbe_hw *hw);
void ngbe_release_eeprom_semaphore(struct ngbe_hw *hw);
+s32 ngbe_save_eeprom_version(struct ngbe_hw *hw);
+
+s32 ngbe_ee_read32(struct ngbe_hw *hw, u32 addr, u32 *data);
#endif /* _NGBE_EEPROM_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index f302df5d9d..0dabb6c1c7 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -44,6 +44,8 @@ s32 ngbe_init_hw(struct ngbe_hw *hw)
DEBUGFUNC("ngbe_init_hw");
+ ngbe_save_eeprom_version(hw);
+
/* Reset the hardware */
status = hw->mac.reset_hw(hw);
if (status == 0) {
@@ -1115,6 +1117,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
/* EEPROM */
rom->init_params = ngbe_init_eeprom_params;
+ rom->read32 = ngbe_ee_read32;
rom->validate_checksum = ngbe_validate_eeprom_checksum_em;
mac->mcft_size = NGBE_EM_MC_TBL_SIZE;
diff --git a/drivers/net/ngbe/base/ngbe_mng.c b/drivers/net/ngbe/base/ngbe_mng.c
index 6ad2838ea7..9416ea4c8d 100644
--- a/drivers/net/ngbe/base/ngbe_mng.c
+++ b/drivers/net/ngbe/base/ngbe_mng.c
@@ -158,6 +158,50 @@ ngbe_host_interface_command(struct ngbe_hw *hw, u32 *buffer,
return err;
}
+/**
+ * ngbe_hic_sr_read - Read EEPROM data using a host interface cmd,
+ * assuming that the semaphore is already obtained.
+ * @hw: pointer to hardware structure
+ * @addr: address of the data in the EEPROM to read
+ * @buf: buffer to store the data read from the EEPROM
+ * @len: number of bytes to read
+ *
+ * Reads up to NGBE_PMMBX_DATA_SIZE bytes from the EEPROM using the hostif.
+ **/
+s32 ngbe_hic_sr_read(struct ngbe_hw *hw, u32 addr, u8 *buf, int len)
+{
+ struct ngbe_hic_read_shadow_ram command;
+ u32 value;
+ int err, i = 0, j = 0;
+
+ if (len > NGBE_PMMBX_DATA_SIZE)
+ return NGBE_ERR_HOST_INTERFACE_COMMAND;
+
+ memset(&command, 0, sizeof(command));
+ command.hdr.req.cmd = FW_READ_SHADOW_RAM_CMD;
+ command.hdr.req.buf_lenh = 0;
+ command.hdr.req.buf_lenl = FW_READ_SHADOW_RAM_LEN;
+ command.hdr.req.checksum = FW_DEFAULT_CHECKSUM;
+ command.address = cpu_to_be32(addr);
+ command.length = cpu_to_be16(len);
+
+ err = ngbe_hic_unlocked(hw, (u32 *)&command,
+ sizeof(command), NGBE_HI_COMMAND_TIMEOUT);
+ if (err)
+ return err;
+
+ while (i < (len >> 2)) {
+ value = rd32a(hw, NGBE_MNGMBX, FW_NVM_DATA_OFFSET + i);
+ ((u32 *)buf)[i] = value;
+ i++;
+ }
+
+ value = rd32a(hw, NGBE_MNGMBX, FW_NVM_DATA_OFFSET + i);
+ for (i <<= 2; i < len; i++)
+ ((u8 *)buf)[i] = ((u8 *)&value)[j++];
+
+ return 0;
+}
+
s32 ngbe_hic_check_cap(struct ngbe_hw *hw)
{
struct ngbe_hic_read_shadow_ram command;
diff --git a/drivers/net/ngbe/base/ngbe_mng.h b/drivers/net/ngbe/base/ngbe_mng.h
index e86893101b..6f368b028f 100644
--- a/drivers/net/ngbe/base/ngbe_mng.h
+++ b/drivers/net/ngbe/base/ngbe_mng.h
@@ -10,12 +10,16 @@
#define NGBE_PMMBX_QSIZE 64 /* Num of dwords in range */
#define NGBE_PMMBX_BSIZE (NGBE_PMMBX_QSIZE * 4)
+#define NGBE_PMMBX_DATA_SIZE (NGBE_PMMBX_BSIZE - FW_NVM_DATA_OFFSET * 4)
#define NGBE_HI_COMMAND_TIMEOUT 5000 /* Process HI command limit */
/* CEM Support */
#define FW_CEM_MAX_RETRIES 3
#define FW_CEM_RESP_STATUS_SUCCESS 0x1
+#define FW_READ_SHADOW_RAM_CMD 0x31
+#define FW_READ_SHADOW_RAM_LEN 0x6
#define FW_DEFAULT_CHECKSUM 0xFF /* checksum always 0xFF */
+#define FW_NVM_DATA_OFFSET 3
#define FW_EEPROM_CHECK_STATUS 0xE9
#define FW_CHECKSUM_CAP_ST_PASS 0x80658383
@@ -61,5 +65,6 @@ struct ngbe_hic_read_shadow_ram {
u16 pad3;
};
+s32 ngbe_hic_sr_read(struct ngbe_hw *hw, u32 addr, u8 *buf, int len);
s32 ngbe_hic_check_cap(struct ngbe_hw *hw);
#endif /* _NGBE_MNG_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 78fb0da7fa..2586eaf36a 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -202,6 +202,7 @@ struct ngbe_hw_stats {
struct ngbe_rom_info {
s32 (*init_params)(struct ngbe_hw *hw);
+ s32 (*read32)(struct ngbe_hw *hw, u32 addr, u32 *data);
s32 (*validate_checksum)(struct ngbe_hw *hw, u16 *checksum_val);
enum ngbe_eeprom_type type;
@@ -310,6 +311,7 @@ struct ngbe_hw {
u16 vendor_id;
u16 sub_device_id;
u16 sub_system_id;
+ u32 eeprom_id;
bool adapter_stopped;
uint64_t isb_dma;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index ce71edd6d8..5566bf26a9 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -1524,6 +1524,26 @@ ngbe_dev_xstats_reset(struct rte_eth_dev *dev)
return 0;
}
+static int
+ngbe_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ int ret;
+
+ ret = snprintf(fw_version, fw_size, "0x%08x", hw->eeprom_id);
+
+ if (ret < 0)
+ return -EINVAL;
+
+ ret += 1; /* add the size of '\0' */
+ if (fw_size < (size_t)ret)
+ return ret;
+
+ return 0;
+}
+
static int
ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
@@ -2181,6 +2201,7 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.xstats_get_names = ngbe_dev_xstats_get_names,
.xstats_get_names_by_id = ngbe_dev_xstats_get_names_by_id,
.queue_stats_mapping_set = ngbe_dev_queue_stats_mapping_set,
+ .fw_version_get = ngbe_fw_version_get,
.mtu_set = ngbe_dev_mtu_set,
.vlan_offload_set = ngbe_vlan_offload_set,
.rx_queue_start = ngbe_dev_rx_queue_start,
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 13/32] net/ngbe: add loopback mode
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (11 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 12/32] net/ngbe: support getting FW version Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 14/32] net/ngbe: support Rx interrupt Jiawen Wu
` (18 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support loopback operation mode.
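A configuration sketch (helper name and queue counts are illustrative);
a non-zero lpbk_mode must be requested before the port is started, so
that ngbe_dev_start() skips link setup and ngbe_dev_rxtx_start() sets
the MAC loopback bit:

#include <string.h>
#include <rte_ethdev.h>

static int configure_loopback(uint16_t port_id, uint16_t nb_rxq,
			      uint16_t nb_txq)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.lpbk_mode = 1; /* 0 means normal operation, non-zero loopback */

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}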
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/ngbe_ethdev.c | 6 ++++++
drivers/net/ngbe/ngbe_rxtx.c | 28 ++++++++++++++++++++++++++++
2 files changed, 34 insertions(+)
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 5566bf26a9..9caca55df3 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -850,6 +850,10 @@ ngbe_dev_start(struct rte_eth_dev *dev)
goto error;
}
+ /* Skip link setup if loopback mode is enabled. */
+ if (hw->is_pf && dev->data->dev_conf.lpbk_mode)
+ goto skip_link_setup;
+
err = hw->mac.check_link(hw, &speed, &link_up, 0);
if (err != 0)
goto error;
@@ -893,6 +897,8 @@ ngbe_dev_start(struct rte_eth_dev *dev)
if (err != 0)
goto error;
+skip_link_setup:
+
if (rte_intr_allow_others(intr_handle)) {
ngbe_dev_misc_interrupt_setup(dev);
/* check if lsc interrupt is enabled */
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 1151173b02..22693c144a 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -2420,6 +2420,17 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
NGBE_FRMSZ_MAX(NGBE_FRAME_SIZE_DFT));
}
+ /*
+ * If loopback mode is configured, set LPBK bit.
+ */
+ hlreg0 = rd32(hw, NGBE_PSRCTL);
+ if (hw->is_pf && dev->data->dev_conf.lpbk_mode)
+ hlreg0 |= NGBE_PSRCTL_LBENA;
+ else
+ hlreg0 &= ~NGBE_PSRCTL_LBENA;
+
+ wr32(hw, NGBE_PSRCTL, hlreg0);
+
/*
* Assume no header split and no VLAN strip support
* on any Rx queue first .
@@ -2538,6 +2549,19 @@ ngbe_dev_tx_init(struct rte_eth_dev *dev)
}
}
+/*
+ * Set up link loopback mode Tx->Rx.
+ */
+static inline void
+ngbe_setup_loopback_link(struct ngbe_hw *hw)
+{
+ PMD_INIT_FUNC_TRACE();
+
+ wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_LB, NGBE_MACRXCFG_LB);
+
+ msec_delay(50);
+}
+
/*
* Start Transmit and Receive Units.
*/
@@ -2592,6 +2616,10 @@ ngbe_dev_rxtx_start(struct rte_eth_dev *dev)
rxctrl |= NGBE_PBRXCTL_ENA;
hw->mac.enable_rx_dma(hw, rxctrl);
+ /* If loopback mode is enabled, set up the link accordingly */
+ if (hw->is_pf && dev->data->dev_conf.lpbk_mode)
+ ngbe_setup_loopback_link(hw);
+
return 0;
}
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 14/32] net/ngbe: support Rx interrupt
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (12 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 13/32] net/ngbe: add loopback mode Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-15 16:53 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 15/32] net/ngbe: support MAC filters Jiawen Wu
` (17 subsequent siblings)
31 siblings, 1 reply; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support Rx queue interrupt.
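A rough application-side sketch, assuming the port was configured with
intr_conf.rxq = 1 so an interrupt vector is mapped to the queue:

#include <rte_ethdev.h>
#include <rte_interrupts.h>

/* Minimal sketch: arm the Rx interrupt for queue 0 and block until it
 * fires, then mask it again before polling the queue.
 */
static int wait_for_rx(uint16_t port_id)
{
	struct rte_epoll_event event;
	int ret;

	ret = rte_eth_dev_rx_intr_ctl_q(port_id, 0, RTE_EPOLL_PER_THREAD,
					RTE_INTR_EVENT_ADD, NULL);
	if (ret != 0)
		return ret;

	/* reaches ngbe_dev_rx_queue_intr_enable() */
	rte_eth_dev_rx_intr_enable(port_id, 0);
	rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, -1);
	/* reaches ngbe_dev_rx_queue_intr_disable() */
	rte_eth_dev_rx_intr_disable(port_id, 0);

	return 0;
}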
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 1 +
doc/guides/nics/ngbe.rst | 1 +
drivers/net/ngbe/ngbe_ethdev.c | 35 +++++++++++++++++++++++++++++++
3 files changed, 37 insertions(+)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 1006c3935b..d14469eb43 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -7,6 +7,7 @@
Speed capabilities = Y
Link status = Y
Link status event = Y
+Rx interrupt = Y
Queue start/stop = Y
MTU update = Y
Jumbo frame = Y
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 50a6e85c49..2783c4a3c4 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -20,6 +20,7 @@ Features
- Port hardware statistics
- Jumbo frames
- Link state information
+- Interrupt mode for RX
- Scattered and gather for TX and RX
- FW version
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 9caca55df3..52642161b7 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -2095,6 +2095,39 @@ ngbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return 0;
}
+static int
+ngbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ uint32_t mask;
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+ if (queue_id < 32) {
+ mask = rd32(hw, NGBE_IMS(0));
+ mask &= (1 << queue_id);
+ wr32(hw, NGBE_IMS(0), mask);
+ }
+ rte_intr_enable(intr_handle);
+
+ return 0;
+}
+
+static int
+ngbe_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ uint32_t mask;
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+ if (queue_id < 32) {
+ mask = rd32(hw, NGBE_IMS(0));
+ mask &= ~(1 << queue_id);
+ wr32(hw, NGBE_IMS(0), mask);
+ }
+
+ return 0;
+}
+
/**
* Set the IVAR registers, mapping interrupt causes to vectors
* @param hw
@@ -2215,6 +2248,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.tx_queue_start = ngbe_dev_tx_queue_start,
.tx_queue_stop = ngbe_dev_tx_queue_stop,
.rx_queue_setup = ngbe_dev_rx_queue_setup,
+ .rx_queue_intr_enable = ngbe_dev_rx_queue_intr_enable,
+ .rx_queue_intr_disable = ngbe_dev_rx_queue_intr_disable,
.rx_queue_release = ngbe_dev_rx_queue_release,
.tx_queue_setup = ngbe_dev_tx_queue_setup,
.tx_queue_release = ngbe_dev_tx_queue_release,
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 15/32] net/ngbe: support MAC filters
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (13 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 14/32] net/ngbe: support Rx interrupt Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 16/32] net/ngbe: support VLAN filter Jiawen Wu
` (16 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add MAC addresses to filter incoming packets, support setting
multicast addresses to filter, and support setting the unicast hash
table array.
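As a usage sketch (the addresses are made up for illustration), these hooks are
reached through the generic ethdev API:

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Minimal sketch: one perfect unicast filter plus a multicast list. */
    static int
    add_filters(uint16_t port_id)
    {
        struct rte_ether_addr uc = {{ 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }};
        struct rte_ether_addr mc = {{ 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 }};
        int ret;

        ret = rte_eth_dev_mac_addr_add(port_id, &uc, 0); /* RAR, pool 0 */
        if (ret != 0)
            return ret;
        /* Replaces any previously programmed multicast list (MTA). */
        return rte_eth_dev_set_mc_addr_list(port_id, &mc, 1);
    }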
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 2 +
doc/guides/nics/ngbe.rst | 1 +
drivers/net/ngbe/base/ngbe_dummy.h | 6 +
drivers/net/ngbe/base/ngbe_hw.c | 135 +++++++++++++++++++++-
drivers/net/ngbe/base/ngbe_hw.h | 4 +
drivers/net/ngbe/base/ngbe_type.h | 11 ++
drivers/net/ngbe/ngbe_ethdev.c | 175 +++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_ethdev.h | 13 +++
8 files changed, 346 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index d14469eb43..4b22dc683a 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -15,6 +15,8 @@ Scattered Rx = Y
TSO = Y
Promiscuous mode = Y
Allmulticast mode = Y
+Unicast MAC filter = Y
+Multicast MAC filter = Y
CRC offload = P
VLAN offload = P
QinQ offload = P
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 2783c4a3c4..4d01c27064 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -11,6 +11,7 @@ for Wangxun 1 Gigabit Ethernet NICs.
Features
--------
+- MAC filtering
- Packet type information
- Checksum offload
- VLAN/QinQ stripping and inserting
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 689480cc9a..fe2d53f312 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -127,6 +127,11 @@ static inline s32 ngbe_mac_init_rx_addrs_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_mac_update_mc_addr_list_dummy(struct ngbe_hw *TUP0,
+ u8 *TUP1, u32 TUP2, ngbe_mc_addr_itr TUP3, bool TUP4)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_mac_init_thermal_ssth_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
@@ -203,6 +208,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->mac.set_vmdq = ngbe_mac_set_vmdq_dummy;
hw->mac.clear_vmdq = ngbe_mac_clear_vmdq_dummy;
hw->mac.init_rx_addrs = ngbe_mac_init_rx_addrs_dummy;
+ hw->mac.update_mc_addr_list = ngbe_mac_update_mc_addr_list_dummy;
hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy;
hw->mac.check_overtemp = ngbe_mac_check_overtemp_dummy;
hw->phy.identify = ngbe_phy_identify_dummy;
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 0dabb6c1c7..897baf179d 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -567,6 +567,138 @@ s32 ngbe_init_rx_addrs(struct ngbe_hw *hw)
return 0;
}
+/**
+ * ngbe_mta_vector - Determines bit-vector in multicast table to set
+ * @hw: pointer to hardware structure
+ * @mc_addr: the multicast address
+ *
+ * Extracts 12 bits from a multicast address to determine which
+ * bit-vector to set in the multicast table. The hardware uses 12 bits of
+ * each incoming Rx multicast address to determine the bit-vector to check
+ * in the MTA. Which of the 4 combinations of 12 bits the hardware uses is
+ * set by the MO field of PSRCTRL. The MO field is set during
+ * initialization to mc_filter_type.
+ **/
+static s32 ngbe_mta_vector(struct ngbe_hw *hw, u8 *mc_addr)
+{
+ u32 vector = 0;
+
+ DEBUGFUNC("ngbe_mta_vector");
+
+ switch (hw->mac.mc_filter_type) {
+ case 0: /* use bits [47:36] of the address */
+ vector = ((mc_addr[4] >> 4) | (((u16)mc_addr[5]) << 4));
+ break;
+ case 1: /* use bits [46:35] of the address */
+ vector = ((mc_addr[4] >> 3) | (((u16)mc_addr[5]) << 5));
+ break;
+ case 2: /* use bits [45:34] of the address */
+ vector = ((mc_addr[4] >> 2) | (((u16)mc_addr[5]) << 6));
+ break;
+ case 3: /* use bits [43:32] of the address */
+ vector = ((mc_addr[4]) | (((u16)mc_addr[5]) << 8));
+ break;
+ default: /* Invalid mc_filter_type */
+ DEBUGOUT("MC filter type param set incorrectly\n");
+ ASSERT(0);
+ break;
+ }
+
+ /* vector can only be 12-bits or boundary will be exceeded */
+ vector &= 0xFFF;
+ return vector;
+}
+
+/**
+ * ngbe_set_mta - Set bit-vector in multicast table
+ * @hw: pointer to hardware structure
+ * @mc_addr: Multicast address
+ *
+ * Sets the bit-vector in the multicast table.
+ **/
+void ngbe_set_mta(struct ngbe_hw *hw, u8 *mc_addr)
+{
+ u32 vector;
+ u32 vector_bit;
+ u32 vector_reg;
+
+ DEBUGFUNC("ngbe_set_mta");
+
+ hw->addr_ctrl.mta_in_use++;
+
+ vector = ngbe_mta_vector(hw, mc_addr);
+ DEBUGOUT(" bit-vector = 0x%03X\n", vector);
+
+ /*
+ * The MTA is a register array of 128 32-bit registers. It is treated
+ * like an array of 4096 bits. We want to set bit
+ * BitArray[vector_value]. So we figure out what register the bit is
+ * in, read it, OR in the new bit, then write back the new value. The
+ * register is determined by the upper 7 bits of the vector value and
+ * the bit within that register are determined by the lower 5 bits of
+ * the value.
+ */
+ vector_reg = (vector >> 5) & 0x7F;
+ vector_bit = vector & 0x1F;
+ hw->mac.mta_shadow[vector_reg] |= (1 << vector_bit);
+}
+
+/**
+ * ngbe_update_mc_addr_list - Updates MAC list of multicast addresses
+ * @hw: pointer to hardware structure
+ * @mc_addr_list: the list of new multicast addresses
+ * @mc_addr_count: number of addresses
+ * @next: iterator function to walk the multicast address list
+ * @clear: flag, when set clears the table beforehand
+ *
+ * When the clear flag is set, the given list replaces any existing list.
+ * Hashes the given addresses into the multicast table.
+ **/
+s32 ngbe_update_mc_addr_list(struct ngbe_hw *hw, u8 *mc_addr_list,
+ u32 mc_addr_count, ngbe_mc_addr_itr next,
+ bool clear)
+{
+ u32 i;
+ u32 vmdq;
+
+ DEBUGFUNC("ngbe_update_mc_addr_list");
+
+ /*
+ * Set the new number of MC addresses that we are being requested to
+ * use.
+ */
+ hw->addr_ctrl.num_mc_addrs = mc_addr_count;
+ hw->addr_ctrl.mta_in_use = 0;
+
+ /* Clear mta_shadow */
+ if (clear) {
+ DEBUGOUT(" Clearing MTA\n");
+ memset(&hw->mac.mta_shadow, 0, sizeof(hw->mac.mta_shadow));
+ }
+
+ /* Update mta_shadow */
+ for (i = 0; i < mc_addr_count; i++) {
+ DEBUGOUT(" Adding the multicast addresses:\n");
+ ngbe_set_mta(hw, next(hw, &mc_addr_list, &vmdq));
+ }
+
+ /* Enable mta */
+ for (i = 0; i < hw->mac.mcft_size; i++)
+ wr32a(hw, NGBE_MCADDRTBL(0), i,
+ hw->mac.mta_shadow[i]);
+
+ if (hw->addr_ctrl.mta_in_use > 0) {
+ u32 psrctl = rd32(hw, NGBE_PSRCTL);
+ psrctl &= ~(NGBE_PSRCTL_ADHF12_MASK | NGBE_PSRCTL_MCHFENA);
+ psrctl |= NGBE_PSRCTL_MCHFENA |
+ NGBE_PSRCTL_ADHF12(hw->mac.mc_filter_type);
+ wr32(hw, NGBE_PSRCTL, psrctl);
+ }
+
+ DEBUGOUT("ngbe update mc addr list complete\n");
+ return 0;
+}
+
/**
* ngbe_acquire_swfw_sync - Acquire SWFW semaphore
* @hw: pointer to hardware structure
@@ -1099,10 +1231,11 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
mac->disable_sec_rx_path = ngbe_disable_sec_rx_path;
mac->enable_sec_rx_path = ngbe_enable_sec_rx_path;
- /* RAR */
+ /* RAR, Multicast */
mac->set_rar = ngbe_set_rar;
mac->clear_rar = ngbe_clear_rar;
mac->init_rx_addrs = ngbe_init_rx_addrs;
+ mac->update_mc_addr_list = ngbe_update_mc_addr_list;
mac->set_vmdq = ngbe_set_vmdq;
mac->clear_vmdq = ngbe_clear_vmdq;
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 6a08c02bee..f06baa4395 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -35,6 +35,9 @@ s32 ngbe_set_rar(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
u32 enable_addr);
s32 ngbe_clear_rar(struct ngbe_hw *hw, u32 index);
s32 ngbe_init_rx_addrs(struct ngbe_hw *hw);
+s32 ngbe_update_mc_addr_list(struct ngbe_hw *hw, u8 *mc_addr_list,
+ u32 mc_addr_count,
+ ngbe_mc_addr_itr func, bool clear);
s32 ngbe_disable_sec_rx_path(struct ngbe_hw *hw);
s32 ngbe_enable_sec_rx_path(struct ngbe_hw *hw);
@@ -50,6 +53,7 @@ s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw);
s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw);
void ngbe_disable_rx(struct ngbe_hw *hw);
void ngbe_enable_rx(struct ngbe_hw *hw);
+void ngbe_set_mta(struct ngbe_hw *hw, u8 *mc_addr);
s32 ngbe_init_shared_code(struct ngbe_hw *hw);
s32 ngbe_set_mac_type(struct ngbe_hw *hw);
s32 ngbe_init_ops_pf(struct ngbe_hw *hw);
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 2586eaf36a..3e62dde707 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -11,6 +11,7 @@
#define NGBE_FRAME_SIZE_MAX (9728) /* Maximum frame size, +FCS */
#define NGBE_FRAME_SIZE_DFT (1522) /* Default frame size, +FCS */
#define NGBE_MAX_QP (8)
+#define NGBE_MAX_UTA 128
#define NGBE_ALIGN 128 /* as intel did */
#define NGBE_ISB_SIZE 16
@@ -68,6 +69,7 @@ enum ngbe_media_type {
struct ngbe_hw;
struct ngbe_addr_filter_info {
+ u32 num_mc_addrs;
u32 mta_in_use;
};
@@ -200,6 +202,10 @@ struct ngbe_hw_stats {
};
+/* iterator type for walking multicast address lists */
+typedef u8* (*ngbe_mc_addr_itr) (struct ngbe_hw *hw, u8 **mc_addr_ptr,
+ u32 *vmdq);
+
struct ngbe_rom_info {
s32 (*init_params)(struct ngbe_hw *hw);
s32 (*read32)(struct ngbe_hw *hw, u32 addr, u32 *data);
@@ -243,6 +249,9 @@ struct ngbe_mac_info {
s32 (*set_vmdq)(struct ngbe_hw *hw, u32 rar, u32 vmdq);
s32 (*clear_vmdq)(struct ngbe_hw *hw, u32 rar, u32 vmdq);
s32 (*init_rx_addrs)(struct ngbe_hw *hw);
+ s32 (*update_mc_addr_list)(struct ngbe_hw *hw, u8 *mc_addr_list,
+ u32 mc_addr_count,
+ ngbe_mc_addr_itr func, bool clear);
/* Manageability interface */
s32 (*init_thermal_sensor_thresh)(struct ngbe_hw *hw);
@@ -251,6 +260,8 @@ struct ngbe_mac_info {
enum ngbe_mac_type type;
u8 addr[ETH_ADDR_LEN];
u8 perm_addr[ETH_ADDR_LEN];
+#define NGBE_MAX_MTA 128
+ u32 mta_shadow[NGBE_MAX_MTA];
s32 mc_filter_type;
u32 mcft_size;
u32 num_rar_entries;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 52642161b7..d076ba8036 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -1553,12 +1553,16 @@ ngbe_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
static int
ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
struct ngbe_hw *hw = ngbe_dev_hw(dev);
dev_info->max_rx_queues = (uint16_t)hw->mac.max_rx_queues;
dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues;
dev_info->min_rx_bufsize = 1024;
dev_info->max_rx_pktlen = 15872;
+ dev_info->max_mac_addrs = hw->mac.num_rar_entries;
+ dev_info->max_hash_mac_addrs = NGBE_VMDQ_NUM_UC_MAC;
+ dev_info->max_vfs = pci_dev->max_vfs;
dev_info->rx_queue_offload_capa = ngbe_get_rx_queue_offloads(dev);
dev_info->rx_offload_capa = (ngbe_get_rx_port_offloads(dev) |
dev_info->rx_queue_offload_capa);
@@ -2055,6 +2059,36 @@ ngbe_dev_interrupt_handler(void *param)
ngbe_dev_interrupt_action(dev);
}
+static int
+ngbe_add_rar(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr,
+ uint32_t index, uint32_t pool)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t enable_addr = 1;
+
+ return ngbe_set_rar(hw, index, mac_addr->addr_bytes,
+ pool, enable_addr);
+}
+
+static void
+ngbe_remove_rar(struct rte_eth_dev *dev, uint32_t index)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+ ngbe_clear_rar(hw, index);
+}
+
+static int
+ngbe_set_default_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+
+ ngbe_remove_rar(dev, 0);
+ ngbe_add_rar(dev, addr, 0, pci_dev->max_vfs);
+
+ return 0;
+}
+
static int
ngbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
@@ -2095,6 +2129,116 @@ ngbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return 0;
}
+static uint32_t
+ngbe_uta_vector(struct ngbe_hw *hw, struct rte_ether_addr *uc_addr)
+{
+ uint32_t vector = 0;
+
+ switch (hw->mac.mc_filter_type) {
+ case 0: /* use bits [47:36] of the address */
+ vector = ((uc_addr->addr_bytes[4] >> 4) |
+ (((uint16_t)uc_addr->addr_bytes[5]) << 4));
+ break;
+ case 1: /* use bits [46:35] of the address */
+ vector = ((uc_addr->addr_bytes[4] >> 3) |
+ (((uint16_t)uc_addr->addr_bytes[5]) << 5));
+ break;
+ case 2: /* use bits [45:34] of the address */
+ vector = ((uc_addr->addr_bytes[4] >> 2) |
+ (((uint16_t)uc_addr->addr_bytes[5]) << 6));
+ break;
+ case 3: /* use bits [43:32] of the address */
+ vector = ((uc_addr->addr_bytes[4]) |
+ (((uint16_t)uc_addr->addr_bytes[5]) << 8));
+ break;
+ default: /* Invalid mc_filter_type */
+ break;
+ }
+
+ /* vector can only be 12-bits or boundary will be exceeded */
+ vector &= 0xFFF;
+ return vector;
+}
+
+static int
+ngbe_uc_hash_table_set(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mac_addr, uint8_t on)
+{
+ uint32_t vector;
+ uint32_t uta_idx;
+ uint32_t reg_val;
+ uint32_t uta_mask;
+ uint32_t psrctl;
+
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_uta_info *uta_info = NGBE_DEV_UTA_INFO(dev);
+
+ vector = ngbe_uta_vector(hw, mac_addr);
+ uta_idx = (vector >> 5) & 0x7F;
+ uta_mask = 0x1UL << (vector & 0x1F);
+
+ if (!!on == !!(uta_info->uta_shadow[uta_idx] & uta_mask))
+ return 0;
+
+ reg_val = rd32(hw, NGBE_UCADDRTBL(uta_idx));
+ if (on) {
+ uta_info->uta_in_use++;
+ reg_val |= uta_mask;
+ uta_info->uta_shadow[uta_idx] |= uta_mask;
+ } else {
+ uta_info->uta_in_use--;
+ reg_val &= ~uta_mask;
+ uta_info->uta_shadow[uta_idx] &= ~uta_mask;
+ }
+
+ wr32(hw, NGBE_UCADDRTBL(uta_idx), reg_val);
+
+ psrctl = rd32(hw, NGBE_PSRCTL);
+ if (uta_info->uta_in_use > 0)
+ psrctl |= NGBE_PSRCTL_UCHFENA;
+ else
+ psrctl &= ~NGBE_PSRCTL_UCHFENA;
+
+ psrctl &= ~NGBE_PSRCTL_ADHF12_MASK;
+ psrctl |= NGBE_PSRCTL_ADHF12(hw->mac.mc_filter_type);
+ wr32(hw, NGBE_PSRCTL, psrctl);
+
+ return 0;
+}
+
+static int
+ngbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_uta_info *uta_info = NGBE_DEV_UTA_INFO(dev);
+ uint32_t psrctl;
+ int i;
+
+ if (on) {
+ for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ uta_info->uta_shadow[i] = ~0;
+ wr32(hw, NGBE_UCADDRTBL(i), ~0);
+ }
+ } else {
+ for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+ uta_info->uta_shadow[i] = 0;
+ wr32(hw, NGBE_UCADDRTBL(i), 0);
+ }
+ }
+
+ psrctl = rd32(hw, NGBE_PSRCTL);
+ if (on)
+ psrctl |= NGBE_PSRCTL_UCHFENA;
+ else
+ psrctl &= ~NGBE_PSRCTL_UCHFENA;
+
+ psrctl &= ~NGBE_PSRCTL_ADHF12_MASK;
+ psrctl |= NGBE_PSRCTL_ADHF12(hw->mac.mc_filter_type);
+ wr32(hw, NGBE_PSRCTL, psrctl);
+
+ return 0;
+}
+
static int
ngbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
@@ -2220,6 +2364,31 @@ ngbe_configure_msix(struct rte_eth_dev *dev)
| NGBE_ITR_WRDSA);
}
+static u8 *
+ngbe_dev_addr_list_itr(__rte_unused struct ngbe_hw *hw,
+ u8 **mc_addr_ptr, u32 *vmdq)
+{
+ u8 *mc_addr;
+
+ *vmdq = 0;
+ mc_addr = *mc_addr_ptr;
+ *mc_addr_ptr = (mc_addr + sizeof(struct rte_ether_addr));
+ return mc_addr;
+}
+
+int
+ngbe_dev_set_mc_addr_list(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mc_addr_set,
+ uint32_t nb_mc_addr)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ u8 *mc_addr_list;
+
+ mc_addr_list = (u8 *)mc_addr_set;
+ return hw->mac.update_mc_addr_list(hw, mc_addr_list, nb_mc_addr,
+ ngbe_dev_addr_list_itr, TRUE);
+}
+
static const struct eth_dev_ops ngbe_eth_dev_ops = {
.dev_configure = ngbe_dev_configure,
.dev_infos_get = ngbe_dev_info_get,
@@ -2253,6 +2422,12 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.rx_queue_release = ngbe_dev_rx_queue_release,
.tx_queue_setup = ngbe_dev_tx_queue_setup,
.tx_queue_release = ngbe_dev_tx_queue_release,
+ .mac_addr_add = ngbe_add_rar,
+ .mac_addr_remove = ngbe_remove_rar,
+ .mac_addr_set = ngbe_set_default_mac_addr,
+ .uc_hash_table_set = ngbe_uc_hash_table_set,
+ .uc_all_hash_table_set = ngbe_uc_all_hash_table_set,
+ .set_mc_addr_list = ngbe_dev_set_mc_addr_list,
};
RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 1527dcc022..65dad4a72b 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -57,6 +57,12 @@ struct ngbe_hwstrip {
uint32_t bitmap[NGBE_HWSTRIP_BITMAP_SIZE];
};
+struct ngbe_uta_info {
+ uint8_t uc_filter_type;
+ uint16_t uta_in_use;
+ uint32_t uta_shadow[NGBE_MAX_UTA];
+};
+
/*
* Structure to store private data for each driver instance (for each port).
*/
@@ -67,6 +73,7 @@ struct ngbe_adapter {
struct ngbe_stat_mappings stat_mappings;
struct ngbe_vfta shadow_vfta;
struct ngbe_hwstrip hwstrip;
+ struct ngbe_uta_info uta_info;
bool rx_bulk_alloc_allowed;
};
@@ -107,6 +114,9 @@ ngbe_dev_intr(struct rte_eth_dev *dev)
#define NGBE_DEV_HWSTRIP(dev) \
(&((struct ngbe_adapter *)(dev)->data->dev_private)->hwstrip)
+#define NGBE_DEV_UTA_INFO(dev) \
+ (&((struct ngbe_adapter *)(dev)->data->dev_private)->uta_info)
+
/*
* Rx/Tx function prototypes
@@ -209,6 +219,9 @@ struct rte_ngbe_xstats_name_off {
};
const uint32_t *ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+int ngbe_dev_set_mc_addr_list(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mc_addr_set,
+ uint32_t nb_mc_addr);
void ngbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev,
uint16_t queue, bool on);
void ngbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev,
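To make the hashing in ngbe_mta_vector() concrete, a standalone sketch of the
mc_filter_type 0 case (bits [47:36] of the address), using the example
multicast address 01:00:5e:00:00:01:

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* 01:00:5e:00:00:01 -> mc_addr[4] = 0x00, mc_addr[5] = 0x01 */
        uint8_t mc_addr[6] = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 };
        uint32_t vector;

        /* mc_filter_type 0: bits [47:36] of the address */
        vector = ((mc_addr[4] >> 4) | ((uint16_t)mc_addr[5] << 4)) & 0xFFF;
        /* Upper 7 bits select one of 128 MTA registers, lower 5 the bit. */
        printf("MTA[%u] |= 1u << %u\n", (vector >> 5) & 0x7F, vector & 0x1F);
        return 0;
    }
    /* Prints: MTA[0] |= 1u << 16 */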
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 16/32] net/ngbe: support VLAN filter
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (14 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 15/32] net/ngbe: support MAC filters Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-15 16:54 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 17/32] net/ngbe: support RSS hash Jiawen Wu
` (15 subsequent siblings)
31 siblings, 1 reply; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support filtering by a VLAN tag identifier.
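As a usage sketch (assuming the port supports the VLAN filter offload; note
that rte_eth_dev_set_vlan_offload() applies the whole VLAN offload mask, so
strip/extend are disabled here):

    #include <rte_ethdev.h>

    /* Minimal sketch: accept only frames tagged with VLAN ID 100. */
    static int
    allow_vlan_100(uint16_t port_id)
    {
        int ret;

        ret = rte_eth_dev_set_vlan_offload(port_id, ETH_VLAN_FILTER_OFFLOAD);
        if (ret != 0)
            return ret;
        return rte_eth_dev_vlan_filter(port_id, 100, 1 /* on */);
    }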
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 1 +
doc/guides/nics/ngbe.rst | 2 +-
drivers/net/ngbe/base/ngbe_dummy.h | 5 ++
drivers/net/ngbe/base/ngbe_hw.c | 29 +++++++
drivers/net/ngbe/base/ngbe_hw.h | 2 +
drivers/net/ngbe/base/ngbe_type.h | 3 +
drivers/net/ngbe/ngbe_ethdev.c | 128 +++++++++++++++++++++++++++++
7 files changed, 169 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 4b22dc683a..265edba361 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -17,6 +17,7 @@ Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
Multicast MAC filter = Y
+VLAN filter = Y
CRC offload = P
VLAN offload = P
QinQ offload = P
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 4d01c27064..3683862fd1 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -11,7 +11,7 @@ for Wangxun 1 Gigabit Ethernet NICs.
Features
--------
-- MAC filtering
+- MAC/VLAN filtering
- Packet type information
- Checksum offload
- VLAN/QinQ stripping and inserting
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index fe2d53f312..7814fd6226 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -132,6 +132,10 @@ static inline s32 ngbe_mac_update_mc_addr_list_dummy(struct ngbe_hw *TUP0,
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_mac_clear_vfta_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_mac_init_thermal_ssth_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
@@ -209,6 +213,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->mac.clear_vmdq = ngbe_mac_clear_vmdq_dummy;
hw->mac.init_rx_addrs = ngbe_mac_init_rx_addrs_dummy;
hw->mac.update_mc_addr_list = ngbe_mac_update_mc_addr_list_dummy;
+ hw->mac.clear_vfta = ngbe_mac_clear_vfta_dummy;
hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy;
hw->mac.check_overtemp = ngbe_mac_check_overtemp_dummy;
hw->phy.identify = ngbe_phy_identify_dummy;
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 897baf179d..ce0867575a 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -19,6 +19,9 @@ s32 ngbe_start_hw(struct ngbe_hw *hw)
{
DEBUGFUNC("ngbe_start_hw");
+ /* Clear the VLAN filter table */
+ hw->mac.clear_vfta(hw);
+
/* Clear statistics registers */
hw->mac.clear_hw_cntrs(hw);
@@ -910,6 +913,30 @@ s32 ngbe_init_uta_tables(struct ngbe_hw *hw)
return 0;
}
+/**
+ * ngbe_clear_vfta - Clear VLAN filter table
+ * @hw: pointer to hardware structure
+ *
+ * Clears the VLAN filter table, and the VMDq index associated with the filter
+ **/
+s32 ngbe_clear_vfta(struct ngbe_hw *hw)
+{
+ u32 offset;
+
+ DEBUGFUNC("ngbe_clear_vfta");
+
+ for (offset = 0; offset < hw->mac.vft_size; offset++)
+ wr32(hw, NGBE_VLANTBL(offset), 0);
+
+ for (offset = 0; offset < NGBE_NUM_POOL; offset++) {
+ wr32(hw, NGBE_PSRVLANIDX, offset);
+ wr32(hw, NGBE_PSRVLAN, 0);
+ wr32(hw, NGBE_PSRVLANPLM(0), 0);
+ }
+
+ return 0;
+}
+
/**
* ngbe_check_mac_link_em - Determine link and speed status
* @hw: pointer to hardware structure
@@ -1238,6 +1265,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
mac->update_mc_addr_list = ngbe_update_mc_addr_list;
mac->set_vmdq = ngbe_set_vmdq;
mac->clear_vmdq = ngbe_clear_vmdq;
+ mac->clear_vfta = ngbe_clear_vfta;
/* Link */
mac->get_link_capabilities = ngbe_get_link_capabilities_em;
@@ -1254,6 +1282,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
rom->validate_checksum = ngbe_validate_eeprom_checksum_em;
mac->mcft_size = NGBE_EM_MC_TBL_SIZE;
+ mac->vft_size = NGBE_EM_VFT_TBL_SIZE;
mac->num_rar_entries = NGBE_EM_RAR_ENTRIES;
mac->max_rx_queues = NGBE_EM_MAX_RX_QUEUES;
mac->max_tx_queues = NGBE_EM_MAX_TX_QUEUES;
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index f06baa4395..a27bd3e650 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -12,6 +12,7 @@
#define NGBE_EM_MAX_RX_QUEUES 8
#define NGBE_EM_RAR_ENTRIES 32
#define NGBE_EM_MC_TBL_SIZE 32
+#define NGBE_EM_VFT_TBL_SIZE 128
s32 ngbe_init_hw(struct ngbe_hw *hw);
s32 ngbe_start_hw(struct ngbe_hw *hw);
@@ -48,6 +49,7 @@ void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask);
s32 ngbe_set_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq);
s32 ngbe_clear_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq);
s32 ngbe_init_uta_tables(struct ngbe_hw *hw);
+s32 ngbe_clear_vfta(struct ngbe_hw *hw);
s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw);
s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw);
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 3e62dde707..5a88d38e84 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -10,6 +10,7 @@
#define NGBE_FRAME_SIZE_MAX (9728) /* Maximum frame size, +FCS */
#define NGBE_FRAME_SIZE_DFT (1522) /* Default frame size, +FCS */
+#define NGBE_NUM_POOL (32)
#define NGBE_MAX_QP (8)
#define NGBE_MAX_UTA 128
@@ -252,6 +253,7 @@ struct ngbe_mac_info {
s32 (*update_mc_addr_list)(struct ngbe_hw *hw, u8 *mc_addr_list,
u32 mc_addr_count,
ngbe_mc_addr_itr func, bool clear);
+ s32 (*clear_vfta)(struct ngbe_hw *hw);
/* Manageability interface */
s32 (*init_thermal_sensor_thresh)(struct ngbe_hw *hw);
@@ -264,6 +266,7 @@ struct ngbe_mac_info {
u32 mta_shadow[NGBE_MAX_MTA];
s32 mc_filter_type;
u32 mcft_size;
+ u32 vft_size;
u32 num_rar_entries;
u32 max_tx_queues;
u32 max_rx_queues;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index d076ba8036..acc018c811 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -492,6 +492,131 @@ static struct rte_pci_driver rte_ngbe_pmd = {
.remove = eth_ngbe_pci_remove,
};
+static int
+ngbe_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_vfta *shadow_vfta = NGBE_DEV_VFTA(dev);
+ uint32_t vfta;
+ uint32_t vid_idx;
+ uint32_t vid_bit;
+
+ vid_idx = (uint32_t)((vlan_id >> 5) & 0x7F);
+ vid_bit = (uint32_t)(1 << (vlan_id & 0x1F));
+ vfta = rd32(hw, NGBE_VLANTBL(vid_idx));
+ if (on)
+ vfta |= vid_bit;
+ else
+ vfta &= ~vid_bit;
+ wr32(hw, NGBE_VLANTBL(vid_idx), vfta);
+
+ /* update local VFTA copy */
+ shadow_vfta->vfta[vid_idx] = vfta;
+
+ return 0;
+}
+
+static void
+ngbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_rx_queue *rxq;
+ bool restart;
+ uint32_t rxcfg, rxbal, rxbah;
+
+ if (on)
+ ngbe_vlan_hw_strip_enable(dev, queue);
+ else
+ ngbe_vlan_hw_strip_disable(dev, queue);
+
+ rxq = dev->data->rx_queues[queue];
+ rxbal = rd32(hw, NGBE_RXBAL(rxq->reg_idx));
+ rxbah = rd32(hw, NGBE_RXBAH(rxq->reg_idx));
+ rxcfg = rd32(hw, NGBE_RXCFG(rxq->reg_idx));
+ if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+ restart = (rxcfg & NGBE_RXCFG_ENA) &&
+ !(rxcfg & NGBE_RXCFG_VLAN);
+ rxcfg |= NGBE_RXCFG_VLAN;
+ } else {
+ restart = (rxcfg & NGBE_RXCFG_ENA) &&
+ (rxcfg & NGBE_RXCFG_VLAN);
+ rxcfg &= ~NGBE_RXCFG_VLAN;
+ }
+ rxcfg &= ~NGBE_RXCFG_ENA;
+
+ if (restart) {
+ /* set vlan strip for ring */
+ ngbe_dev_rx_queue_stop(dev, queue);
+ wr32(hw, NGBE_RXBAL(rxq->reg_idx), rxbal);
+ wr32(hw, NGBE_RXBAH(rxq->reg_idx), rxbah);
+ wr32(hw, NGBE_RXCFG(rxq->reg_idx), rxcfg);
+ ngbe_dev_rx_queue_start(dev, queue);
+ }
+}
+
+static int
+ngbe_vlan_tpid_set(struct rte_eth_dev *dev,
+ enum rte_vlan_type vlan_type,
+ uint16_t tpid)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ int ret = 0;
+ uint32_t portctrl, vlan_ext, qinq;
+
+ portctrl = rd32(hw, NGBE_PORTCTL);
+
+ vlan_ext = (portctrl & NGBE_PORTCTL_VLANEXT);
+ qinq = vlan_ext && (portctrl & NGBE_PORTCTL_QINQ);
+ switch (vlan_type) {
+ case ETH_VLAN_TYPE_INNER:
+ if (vlan_ext) {
+ wr32m(hw, NGBE_VLANCTL,
+ NGBE_VLANCTL_TPID_MASK,
+ NGBE_VLANCTL_TPID(tpid));
+ wr32m(hw, NGBE_DMATXCTRL,
+ NGBE_DMATXCTRL_TPID_MASK,
+ NGBE_DMATXCTRL_TPID(tpid));
+ } else {
+ ret = -ENOTSUP;
+ PMD_DRV_LOG(ERR,
+ "Inner type is not supported by single VLAN");
+ }
+
+ if (qinq) {
+ wr32m(hw, NGBE_TAGTPID(0),
+ NGBE_TAGTPID_LSB_MASK,
+ NGBE_TAGTPID_LSB(tpid));
+ }
+ break;
+ case ETH_VLAN_TYPE_OUTER:
+ if (vlan_ext) {
+ /* Only the high 16 bits are valid */
+ wr32m(hw, NGBE_EXTAG,
+ NGBE_EXTAG_VLAN_MASK,
+ NGBE_EXTAG_VLAN(tpid));
+ } else {
+ wr32m(hw, NGBE_VLANCTL,
+ NGBE_VLANCTL_TPID_MASK,
+ NGBE_VLANCTL_TPID(tpid));
+ wr32m(hw, NGBE_DMATXCTRL,
+ NGBE_DMATXCTRL_TPID_MASK,
+ NGBE_DMATXCTRL_TPID(tpid));
+ }
+
+ if (qinq) {
+ wr32m(hw, NGBE_TAGTPID(0),
+ NGBE_TAGTPID_MSB_MASK,
+ NGBE_TAGTPID_MSB(tpid));
+ }
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "Unsupported VLAN type %d", vlan_type);
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
void
ngbe_vlan_hw_filter_disable(struct rte_eth_dev *dev)
{
@@ -2411,7 +2536,10 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.queue_stats_mapping_set = ngbe_dev_queue_stats_mapping_set,
.fw_version_get = ngbe_fw_version_get,
.mtu_set = ngbe_dev_mtu_set,
+ .vlan_filter_set = ngbe_vlan_filter_set,
+ .vlan_tpid_set = ngbe_vlan_tpid_set,
.vlan_offload_set = ngbe_vlan_offload_set,
+ .vlan_strip_queue_set = ngbe_vlan_strip_queue_set,
.rx_queue_start = ngbe_dev_rx_queue_start,
.rx_queue_stop = ngbe_dev_rx_queue_stop,
.tx_queue_start = ngbe_dev_tx_queue_start,
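As a usage sketch for the two new per-port hooks (queue 0 and TPID 0x88A8 are
arbitrary illustration values):

    #include <rte_ethdev.h>

    /* Minimal sketch: strip tags on Rx queue 0 only, set the outer TPID. */
    static void
    tune_vlan(uint16_t port_id)
    {
        rte_eth_dev_set_vlan_strip_on_queue(port_id, 0, 1 /* on */);
        rte_eth_dev_set_vlan_ether_type(port_id, ETH_VLAN_TYPE_OUTER,
                                        0x88A8);
    }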
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 17/32] net/ngbe: support RSS hash
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (15 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 16/32] net/ngbe: support VLAN filter Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 18/32] net/ngbe: support SRIOV Jiawen Wu
` (14 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support RSS hashing on Rx, and allow configuration of the RSS hash computation.
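As a configuration sketch (hash types chosen arbitrarily; a NULL key falls
back to the default key added in this patch):

    #include <string.h>
    #include <rte_ethdev.h>

    /* Minimal sketch: spread IPv4 TCP/UDP flows across nb_rxq queues. */
    static int
    configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
        struct rte_eth_conf conf;

        memset(&conf, 0, sizeof(conf));
        conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
        conf.rx_adv_conf.rss_conf.rss_key = NULL; /* use the default key */
        conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IPV4 |
            ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP;
        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }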
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 3 +
doc/guides/nics/ngbe.rst | 2 +
drivers/net/ngbe/meson.build | 2 +
drivers/net/ngbe/ngbe_ethdev.c | 99 +++++++++++++
drivers/net/ngbe/ngbe_ethdev.h | 27 ++++
drivers/net/ngbe/ngbe_rxtx.c | 235 ++++++++++++++++++++++++++++++
6 files changed, 368 insertions(+)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 265edba361..70d731a695 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -17,6 +17,9 @@ Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
Multicast MAC filter = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
VLAN filter = Y
CRC offload = P
VLAN offload = P
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 3683862fd1..ce160e832c 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -11,6 +11,8 @@ for Wangxun 1 Gigabit Ethernet NICs.
Features
--------
+- Multiple queues for Tx and Rx
+- Receive Side Scaling (RSS)
- MAC/VLAN filtering
- Packet type information
- Checksum offload
diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
index 05f94fe7d6..c55e6c20e8 100644
--- a/drivers/net/ngbe/meson.build
+++ b/drivers/net/ngbe/meson.build
@@ -16,4 +16,6 @@ sources = files(
'ngbe_rxtx.c',
)
+deps += ['hash']
+
includes += include_directories('base')
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index acc018c811..0bc1400aea 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -856,6 +856,9 @@ ngbe_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
+ if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+
/* set flag to update link status after init */
intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
@@ -1082,6 +1085,7 @@ static int
ngbe_dev_stop(struct rte_eth_dev *dev)
{
struct rte_eth_link link;
+ struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
@@ -1129,6 +1133,8 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
intr_handle->intr_vec = NULL;
}
+ adapter->rss_reta_updated = 0;
+
hw->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -1718,6 +1724,10 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = rx_desc_lim;
dev_info->tx_desc_lim = tx_desc_lim;
+ dev_info->hash_key_size = NGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
+ dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+ dev_info->flow_type_rss_offloads = NGBE_RSS_OFFLOAD_ALL;
+
dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
ETH_LINK_SPEED_10M;
@@ -2184,6 +2194,91 @@ ngbe_dev_interrupt_handler(void *param)
ngbe_dev_interrupt_action(dev);
}
+int
+ngbe_dev_rss_reta_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ uint8_t i, j, mask;
+ uint32_t reta;
+ uint16_t idx, shift;
+ struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (!hw->is_pf) {
+ PMD_DRV_LOG(ERR, "RSS reta update is not supported on this "
+ "NIC.");
+ return -ENOTSUP;
+ }
+
+ if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+ "(%d) doesn't match the number hardware can supported "
+ "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < reta_size; i += 4) {
+ idx = i / RTE_RETA_GROUP_SIZE;
+ shift = i % RTE_RETA_GROUP_SIZE;
+ mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
+ if (!mask)
+ continue;
+
+ reta = rd32a(hw, NGBE_REG_RSSTBL, i >> 2);
+ for (j = 0; j < 4; j++) {
+ if (RS8(mask, j, 0x1)) {
+ reta &= ~(MS32(8 * j, 0xFF));
+ reta |= LS32(reta_conf[idx].reta[shift + j],
+ 8 * j, 0xFF);
+ }
+ }
+ wr32a(hw, NGBE_REG_RSSTBL, i >> 2, reta);
+ }
+ adapter->rss_reta_updated = 1;
+
+ return 0;
+}
+
+int
+ngbe_dev_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint8_t i, j, mask;
+ uint32_t reta;
+ uint16_t idx, shift;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (reta_size != ETH_RSS_RETA_SIZE_128) {
+ PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+ "(%d) doesn't match the number hardware can supported "
+ "(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < reta_size; i += 4) {
+ idx = i / RTE_RETA_GROUP_SIZE;
+ shift = i % RTE_RETA_GROUP_SIZE;
+ mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
+ if (!mask)
+ continue;
+
+ reta = rd32a(hw, NGBE_REG_RSSTBL, i >> 2);
+ for (j = 0; j < 4; j++) {
+ if (RS8(mask, j, 0x1))
+ reta_conf[idx].reta[shift + j] =
+ (uint16_t)RS32(reta, 8 * j, 0xFF);
+ }
+ }
+
+ return 0;
+}
+
static int
ngbe_add_rar(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr,
uint32_t index, uint32_t pool)
@@ -2555,6 +2650,10 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.mac_addr_set = ngbe_set_default_mac_addr,
.uc_hash_table_set = ngbe_uc_hash_table_set,
.uc_all_hash_table_set = ngbe_uc_all_hash_table_set,
+ .reta_update = ngbe_dev_rss_reta_update,
+ .reta_query = ngbe_dev_rss_reta_query,
+ .rss_hash_update = ngbe_dev_rss_hash_update,
+ .rss_hash_conf_get = ngbe_dev_rss_hash_conf_get,
.set_mc_addr_list = ngbe_dev_set_mc_addr_list,
};
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 65dad4a72b..083db6080b 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -17,6 +17,7 @@
#define NGBE_VFTA_SIZE 128
#define NGBE_VLAN_TAG_SIZE 4
+#define NGBE_HKEY_MAX_INDEX 10
/*Default value of Max Rx Queue*/
#define NGBE_MAX_RX_QUEUE_NUM 8
@@ -28,6 +29,17 @@
#define NGBE_QUEUE_ITR_INTERVAL_DEFAULT 500 /* 500us */
+#define NGBE_RSS_OFFLOAD_ALL ( \
+ ETH_RSS_IPV4 | \
+ ETH_RSS_NONFRAG_IPV4_TCP | \
+ ETH_RSS_NONFRAG_IPV4_UDP | \
+ ETH_RSS_IPV6 | \
+ ETH_RSS_NONFRAG_IPV6_TCP | \
+ ETH_RSS_NONFRAG_IPV6_UDP | \
+ ETH_RSS_IPV6_EX | \
+ ETH_RSS_IPV6_TCP_EX | \
+ ETH_RSS_IPV6_UDP_EX)
+
#define NGBE_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
#define NGBE_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
@@ -75,6 +87,9 @@ struct ngbe_adapter {
struct ngbe_hwstrip hwstrip;
struct ngbe_uta_info uta_info;
bool rx_bulk_alloc_allowed;
+
+ /* For RSS reta table update */
+ uint8_t rss_reta_updated;
};
static inline struct ngbe_adapter *
@@ -177,6 +192,12 @@ uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t ngbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
+int ngbe_dev_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf);
+
+int ngbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf);
+
void ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction,
uint8_t queue, uint8_t msix_vector);
@@ -222,6 +243,12 @@ const uint32_t *ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev);
int ngbe_dev_set_mc_addr_list(struct rte_eth_dev *dev,
struct rte_ether_addr *mc_addr_set,
uint32_t nb_mc_addr);
+int ngbe_dev_rss_reta_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+int ngbe_dev_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
void ngbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev,
uint16_t queue, bool on);
void ngbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev,
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 22693c144a..04abc2bb47 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -897,6 +897,18 @@ ngbe_rxd_pkt_info_to_pkt_type(uint32_t pkt_info, uint16_t ptid_mask)
return ngbe_decode_ptype(ptid);
}
+static inline uint64_t
+ngbe_rxd_pkt_info_to_pkt_flags(uint32_t pkt_info)
+{
+ static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
+ 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+ 0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
+ PKT_RX_RSS_HASH, 0, 0, 0,
+ 0, 0, 0, PKT_RX_FDIR,
+ };
+ return ip_rss_types_map[NGBE_RXD_RSSTYPE(pkt_info)];
+}
+
static inline uint64_t
rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags)
{
@@ -1008,10 +1020,16 @@ ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq)
pkt_flags = rx_desc_status_to_pkt_flags(s[j],
rxq->vlan_flags);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+ pkt_flags |=
+ ngbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
mb->ol_flags = pkt_flags;
mb->packet_type =
ngbe_rxd_pkt_info_to_pkt_type(pkt_info[j],
rxq->pkt_type_mask);
+
+ if (likely(pkt_flags & PKT_RX_RSS_HASH))
+ mb->hash.rss =
+ rte_le_to_cpu_32(rxdp[j].qw0.dw1);
}
/* Move mbuf pointers from the S/W ring to the stage */
@@ -1302,6 +1320,7 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* - packet length,
* - Rx port identifier.
* 2) integrate hardware offload data, if any:
+ * - RSS flag & hash,
* - IP checksum flag,
* - VLAN TCI, if any,
* - error flags.
@@ -1323,10 +1342,14 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = rx_desc_status_to_pkt_flags(staterr,
rxq->vlan_flags);
pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags |= ngbe_rxd_pkt_info_to_pkt_flags(pkt_info);
rxm->ol_flags = pkt_flags;
rxm->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info,
rxq->pkt_type_mask);
+ if (likely(pkt_flags & PKT_RX_RSS_HASH))
+ rxm->hash.rss = rte_le_to_cpu_32(rxd.qw0.dw1);
+
/*
* Store the mbuf address into the next entry of the array
* of returned packets.
@@ -1366,6 +1389,7 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* Fill the following info in the HEAD buffer of the Rx cluster:
* - RX port identifier
* - hardware offload data, if any:
+ * - RSS flag & hash
* - IP checksum flag
* - VLAN TCI, if any
* - error flags
@@ -1389,9 +1413,13 @@ ngbe_fill_cluster_head_buf(struct rte_mbuf *head, struct ngbe_rx_desc *desc,
pkt_info = rte_le_to_cpu_32(desc->qw0.dw0);
pkt_flags = rx_desc_status_to_pkt_flags(staterr, rxq->vlan_flags);
pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
+ pkt_flags |= ngbe_rxd_pkt_info_to_pkt_flags(pkt_info);
head->ol_flags = pkt_flags;
head->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info,
rxq->pkt_type_mask);
+
+ if (likely(pkt_flags & PKT_RX_RSS_HASH))
+ head->hash.rss = rte_le_to_cpu_32(desc->qw0.dw1);
}
/**
@@ -2249,6 +2277,188 @@ ngbe_dev_free_queues(struct rte_eth_dev *dev)
dev->data->nb_tx_queues = 0;
}
+/**
+ * Receive Side Scaling (RSS)
+ *
+ * Principles:
+ * The source and destination IP addresses of the IP header and the source
+ * and destination ports of TCP/UDP headers, if any, of received packets are
+ * hashed against a configurable random key to compute a 32-bit RSS hash result.
+ * The seven (7) LSBs of the 32-bit hash result are used as an index into a
+ * 128-entry redirection table (RETA). Each entry of the RETA provides a 3-bit
+ * RSS output index which is used as the Rx queue index where to store the
+ * received packets.
+ * The following output is supplied in the Rx write-back descriptor:
+ * - 32-bit result of the Microsoft RSS hash function,
+ * - 4-bit RSS type field.
+ */
+
+/*
+ * Used as the default key.
+ */
+static uint8_t rss_intel_key[40] = {
+ 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2,
+ 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0,
+ 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4,
+ 0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C,
+ 0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
+};
+
+static void
+ngbe_rss_disable(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+ wr32m(hw, NGBE_RACTL, NGBE_RACTL_RSSENA, 0);
+}
+
+int
+ngbe_dev_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint8_t *hash_key;
+ uint32_t mrqc;
+ uint32_t rss_key;
+ uint64_t rss_hf;
+ uint16_t i;
+
+ if (!hw->is_pf) {
+ PMD_DRV_LOG(ERR, "RSS hash update is not supported on this "
+ "NIC.");
+ return -ENOTSUP;
+ }
+
+ hash_key = rss_conf->rss_key;
+ if (hash_key) {
+ /* Fill in RSS hash key */
+ for (i = 0; i < 10; i++) {
+ rss_key = LS32(hash_key[(i * 4) + 0], 0, 0xFF);
+ rss_key |= LS32(hash_key[(i * 4) + 1], 8, 0xFF);
+ rss_key |= LS32(hash_key[(i * 4) + 2], 16, 0xFF);
+ rss_key |= LS32(hash_key[(i * 4) + 3], 24, 0xFF);
+ wr32a(hw, NGBE_REG_RSSKEY, i, rss_key);
+ }
+ }
+
+ /* Set configured hashing protocols */
+ rss_hf = rss_conf->rss_hf & NGBE_RSS_OFFLOAD_ALL;
+
+ mrqc = rd32(hw, NGBE_RACTL);
+ mrqc &= ~NGBE_RACTL_RSSMASK;
+ if (rss_hf & ETH_RSS_IPV4)
+ mrqc |= NGBE_RACTL_RSSIPV4;
+ if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+ mrqc |= NGBE_RACTL_RSSIPV4TCP;
+ if (rss_hf & ETH_RSS_IPV6 ||
+ rss_hf & ETH_RSS_IPV6_EX)
+ mrqc |= NGBE_RACTL_RSSIPV6;
+ if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
+ rss_hf & ETH_RSS_IPV6_TCP_EX)
+ mrqc |= NGBE_RACTL_RSSIPV6TCP;
+ if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+ mrqc |= NGBE_RACTL_RSSIPV4UDP;
+ if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
+ rss_hf & ETH_RSS_IPV6_UDP_EX)
+ mrqc |= NGBE_RACTL_RSSIPV6UDP;
+
+ if (rss_hf)
+ mrqc |= NGBE_RACTL_RSSENA;
+ else
+ mrqc &= ~NGBE_RACTL_RSSENA;
+
+ wr32(hw, NGBE_RACTL, mrqc);
+
+ return 0;
+}
+
+int
+ngbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint8_t *hash_key;
+ uint32_t mrqc;
+ uint32_t rss_key;
+ uint64_t rss_hf;
+ uint16_t i;
+
+ hash_key = rss_conf->rss_key;
+ if (hash_key) {
+ /* Return RSS hash key */
+ for (i = 0; i < 10; i++) {
+ rss_key = rd32a(hw, NGBE_REG_RSSKEY, i);
+ hash_key[(i * 4) + 0] = RS32(rss_key, 0, 0xFF);
+ hash_key[(i * 4) + 1] = RS32(rss_key, 8, 0xFF);
+ hash_key[(i * 4) + 2] = RS32(rss_key, 16, 0xFF);
+ hash_key[(i * 4) + 3] = RS32(rss_key, 24, 0xFF);
+ }
+ }
+
+ rss_hf = 0;
+
+ mrqc = rd32(hw, NGBE_RACTL);
+ if (mrqc & NGBE_RACTL_RSSIPV4)
+ rss_hf |= ETH_RSS_IPV4;
+ if (mrqc & NGBE_RACTL_RSSIPV4TCP)
+ rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+ if (mrqc & NGBE_RACTL_RSSIPV6)
+ rss_hf |= ETH_RSS_IPV6 |
+ ETH_RSS_IPV6_EX;
+ if (mrqc & NGBE_RACTL_RSSIPV6TCP)
+ rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
+ ETH_RSS_IPV6_TCP_EX;
+ if (mrqc & NGBE_RACTL_RSSIPV4UDP)
+ rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+ if (mrqc & NGBE_RACTL_RSSIPV6UDP)
+ rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
+ ETH_RSS_IPV6_UDP_EX;
+ if (!(mrqc & NGBE_RACTL_RSSENA))
+ rss_hf = 0;
+
+ rss_hf &= NGBE_RSS_OFFLOAD_ALL;
+
+ rss_conf->rss_hf = rss_hf;
+ return 0;
+}
+
+static void
+ngbe_rss_configure(struct rte_eth_dev *dev)
+{
+ struct rte_eth_rss_conf rss_conf;
+ struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t reta;
+ uint16_t i;
+ uint16_t j;
+
+ PMD_INIT_FUNC_TRACE();
+
+ /*
+ * Fill in redirection table
+ * The byte-swap is needed because NIC registers are in
+ * little-endian order.
+ */
+ if (adapter->rss_reta_updated == 0) {
+ reta = 0;
+ for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+ if (j == dev->data->nb_rx_queues)
+ j = 0;
+ reta = (reta >> 8) | LS32(j, 24, 0xFF);
+ if ((i & 3) == 3)
+ wr32a(hw, NGBE_REG_RSSTBL, i >> 2, reta);
+ }
+ }
+ /*
+ * Configure the RSS key and the RSS protocols used to compute
+ * the RSS hash of input packets.
+ */
+ rss_conf = dev->data->dev_conf.rx_adv_conf.rss_conf;
+ if (rss_conf.rss_key == NULL)
+ rss_conf.rss_key = rss_intel_key; /* Default hash key */
+ ngbe_dev_rss_hash_update(dev, &rss_conf);
+}
+
void ngbe_configure_port(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
@@ -2317,6 +2527,24 @@ ngbe_alloc_rx_queue_mbufs(struct ngbe_rx_queue *rxq)
return 0;
}
+static int
+ngbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
+{
+ switch (dev->data->dev_conf.rxmode.mq_mode) {
+ case ETH_MQ_RX_RSS:
+ ngbe_rss_configure(dev);
+ break;
+
+ case ETH_MQ_RX_NONE:
+ default:
+ /* if mq_mode is none, disable rss mode.*/
+ ngbe_rss_disable(dev);
+ break;
+ }
+
+ return 0;
+}
+
void
ngbe_set_rx_function(struct rte_eth_dev *dev)
{
@@ -2488,8 +2716,15 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
dev->data->scattered_rx = 1;
+
+ /*
+ * Configure the device for multi-queue Rx (RSS) operation.
+ */
+ ngbe_dev_mq_rx_configure(dev);
+
/*
* Setup the Checksum Register.
+ * Disable Full-Packet Checksum which is mutually exclusive with RSS.
* Enable IP/L4 checksum computation by hardware if requested to do so.
*/
rxcsum = rd32(hw, NGBE_PSRCTL);
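As a runtime sketch for the new reta_update hook (steering every entry to
queue 0, purely for illustration):

    #include <string.h>
    #include <rte_ethdev.h>

    /* Minimal sketch: point all 128 RETA entries at Rx queue 0. */
    static int
    reta_all_to_queue0(uint16_t port_id)
    {
        struct rte_eth_rss_reta_entry64
            reta[ETH_RSS_RETA_SIZE_128 / RTE_RETA_GROUP_SIZE];
        unsigned int i;

        memset(reta, 0, sizeof(reta));
        for (i = 0; i < ETH_RSS_RETA_SIZE_128; i++) {
            reta[i / RTE_RETA_GROUP_SIZE].mask |=
                1ULL << (i % RTE_RETA_GROUP_SIZE);
            reta[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] = 0;
        }
        return rte_eth_dev_rss_reta_update(port_id, reta,
                                           ETH_RSS_RETA_SIZE_128);
    }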
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 18/32] net/ngbe: support SRIOV
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (16 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 17/32] net/ngbe: support RSS hash Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 19/32] net/ngbe: add mailbox process operations Jiawen Wu
` (13 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Initialize and configure PF module to support SRIOV.
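The VFs themselves are created through the kernel PCI interface (e.g.
sriov_numvfs in sysfs) before the DPDK application starts; as a small sketch,
the application can confirm what the PF sees via dev_info:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Minimal sketch: report how many VFs are attached to this PF. */
    static void
    show_vfs(uint16_t port_id)
    {
        struct rte_eth_dev_info info;

        if (rte_eth_dev_info_get(port_id, &info) == 0)
            printf("port %u: max_vfs = %u\n", port_id, info.max_vfs);
    }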
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 1 +
drivers/net/ngbe/base/meson.build | 1 +
drivers/net/ngbe/base/ngbe_dummy.h | 17 +++
drivers/net/ngbe/base/ngbe_hw.c | 47 ++++++-
drivers/net/ngbe/base/ngbe_mbx.c | 30 +++++
drivers/net/ngbe/base/ngbe_mbx.h | 11 ++
drivers/net/ngbe/base/ngbe_type.h | 22 ++++
drivers/net/ngbe/meson.build | 1 +
drivers/net/ngbe/ngbe_ethdev.c | 32 ++++-
drivers/net/ngbe/ngbe_ethdev.h | 19 +++
drivers/net/ngbe/ngbe_pf.c | 196 +++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_rxtx.c | 26 ++--
12 files changed, 390 insertions(+), 13 deletions(-)
create mode 100644 drivers/net/ngbe/base/ngbe_mbx.c
create mode 100644 drivers/net/ngbe/base/ngbe_mbx.h
create mode 100644 drivers/net/ngbe/ngbe_pf.c
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 70d731a695..9a497ccae6 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -20,6 +20,7 @@ Multicast MAC filter = Y
RSS hash = Y
RSS key update = Y
RSS reta update = Y
+SR-IOV = Y
VLAN filter = Y
CRC offload = P
VLAN offload = P
diff --git a/drivers/net/ngbe/base/meson.build b/drivers/net/ngbe/base/meson.build
index 6081281135..390b0f9c12 100644
--- a/drivers/net/ngbe/base/meson.build
+++ b/drivers/net/ngbe/base/meson.build
@@ -4,6 +4,7 @@
sources = [
'ngbe_eeprom.c',
'ngbe_hw.c',
+ 'ngbe_mbx.c',
'ngbe_mng.c',
'ngbe_phy.c',
'ngbe_phy_rtl.c',
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 7814fd6226..5cb09bfcaa 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -136,6 +136,14 @@ static inline s32 ngbe_mac_clear_vfta_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline void ngbe_mac_set_mac_anti_spoofing_dummy(struct ngbe_hw *TUP0,
+ bool TUP1, int TUP2)
+{
+}
+static inline void ngbe_mac_set_vlan_anti_spoofing_dummy(struct ngbe_hw *TUP0,
+ bool TUP1, int TUP2)
+{
+}
static inline s32 ngbe_mac_init_thermal_ssth_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
@@ -187,6 +195,12 @@ static inline s32 ngbe_phy_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1,
{
return NGBE_ERR_OPS_DUMMY;
}
+
+/* struct ngbe_mbx_operations */
+static inline void ngbe_mbx_init_params_dummy(struct ngbe_hw *TUP0)
+{
+}
+
static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
{
hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy;
@@ -214,6 +228,8 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->mac.init_rx_addrs = ngbe_mac_init_rx_addrs_dummy;
hw->mac.update_mc_addr_list = ngbe_mac_update_mc_addr_list_dummy;
hw->mac.clear_vfta = ngbe_mac_clear_vfta_dummy;
+ hw->mac.set_mac_anti_spoofing = ngbe_mac_set_mac_anti_spoofing_dummy;
+ hw->mac.set_vlan_anti_spoofing = ngbe_mac_set_vlan_anti_spoofing_dummy;
hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy;
hw->mac.check_overtemp = ngbe_mac_check_overtemp_dummy;
hw->phy.identify = ngbe_phy_identify_dummy;
@@ -225,6 +241,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->phy.write_reg_unlocked = ngbe_phy_write_reg_unlocked_dummy;
hw->phy.setup_link = ngbe_phy_setup_link_dummy;
hw->phy.check_link = ngbe_phy_check_link_dummy;
+ hw->mbx.init_params = ngbe_mbx_init_params_dummy;
}
#endif /* _NGBE_TYPE_DUMMY_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index ce0867575a..8b45a91f78 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -4,6 +4,7 @@
*/
#include "ngbe_type.h"
+#include "ngbe_mbx.h"
#include "ngbe_phy.h"
#include "ngbe_eeprom.h"
#include "ngbe_mng.h"
@@ -1008,6 +1009,44 @@ s32 ngbe_setup_mac_link_em(struct ngbe_hw *hw,
return status;
}
+/**
+ * ngbe_set_mac_anti_spoofing - Enable/Disable MAC anti-spoofing
+ * @hw: pointer to hardware structure
+ * @enable: enable or disable switch for MAC anti-spoofing
+ * @vf: Virtual Function pool - VF Pool to set for MAC anti-spoofing
+ *
+ **/
+void ngbe_set_mac_anti_spoofing(struct ngbe_hw *hw, bool enable, int vf)
+{
+ u32 pfvfspoof;
+
+ pfvfspoof = rd32(hw, NGBE_POOLTXASMAC);
+ if (enable)
+ pfvfspoof |= (1 << vf);
+ else
+ pfvfspoof &= ~(1 << vf);
+ wr32(hw, NGBE_POOLTXASMAC, pfvfspoof);
+}
+
+/**
+ * ngbe_set_vlan_anti_spoofing - Enable/Disable VLAN anti-spoofing
+ * @hw: pointer to hardware structure
+ * @enable: enable or disable switch for VLAN anti-spoofing
+ * @vf: Virtual Function pool - VF Pool to set for VLAN anti-spoofing
+ *
+ **/
+void ngbe_set_vlan_anti_spoofing(struct ngbe_hw *hw, bool enable, int vf)
+{
+ u32 pfvfspoof;
+
+ pfvfspoof = rd32(hw, NGBE_POOLTXASVLAN);
+ if (enable)
+ pfvfspoof |= (1 << vf);
+ else
+ pfvfspoof &= ~(1 << vf);
+ wr32(hw, NGBE_POOLTXASVLAN, pfvfspoof);
+}
+
/**
* ngbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds
* @hw: pointer to hardware structure
@@ -1231,6 +1270,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
struct ngbe_mac_info *mac = &hw->mac;
struct ngbe_phy_info *phy = &hw->phy;
struct ngbe_rom_info *rom = &hw->rom;
+ struct ngbe_mbx_info *mbx = &hw->mbx;
DEBUGFUNC("ngbe_init_ops_pf");
@@ -1258,7 +1298,8 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
mac->disable_sec_rx_path = ngbe_disable_sec_rx_path;
mac->enable_sec_rx_path = ngbe_enable_sec_rx_path;
- /* RAR, Multicast */
+
+ /* RAR, Multicast, VLAN */
mac->set_rar = ngbe_set_rar;
mac->clear_rar = ngbe_clear_rar;
mac->init_rx_addrs = ngbe_init_rx_addrs;
@@ -1266,6 +1307,8 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
mac->set_vmdq = ngbe_set_vmdq;
mac->clear_vmdq = ngbe_clear_vmdq;
mac->clear_vfta = ngbe_clear_vfta;
+ mac->set_mac_anti_spoofing = ngbe_set_mac_anti_spoofing;
+ mac->set_vlan_anti_spoofing = ngbe_set_vlan_anti_spoofing;
/* Link */
mac->get_link_capabilities = ngbe_get_link_capabilities_em;
@@ -1276,6 +1319,8 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
mac->init_thermal_sensor_thresh = ngbe_init_thermal_sensor_thresh;
mac->check_overtemp = ngbe_mac_check_overtemp;
+ mbx->init_params = ngbe_init_mbx_params_pf;
+
/* EEPROM */
rom->init_params = ngbe_init_eeprom_params;
rom->read32 = ngbe_ee_read32;
diff --git a/drivers/net/ngbe/base/ngbe_mbx.c b/drivers/net/ngbe/base/ngbe_mbx.c
new file mode 100644
index 0000000000..1ac9531ceb
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_mbx.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include "ngbe_type.h"
+
+#include "ngbe_mbx.h"
+
+/**
+ * ngbe_init_mbx_params_pf - set initial values for pf mailbox
+ * @hw: pointer to the HW structure
+ *
+ * Initializes the hw->mbx struct to correct values for pf mailbox
+ */
+void ngbe_init_mbx_params_pf(struct ngbe_hw *hw)
+{
+ struct ngbe_mbx_info *mbx = &hw->mbx;
+
+ mbx->timeout = 0;
+ mbx->usec_delay = 0;
+
+ mbx->size = NGBE_P2VMBX_SIZE;
+
+ mbx->stats.msgs_tx = 0;
+ mbx->stats.msgs_rx = 0;
+ mbx->stats.reqs = 0;
+ mbx->stats.acks = 0;
+ mbx->stats.rsts = 0;
+}
diff --git a/drivers/net/ngbe/base/ngbe_mbx.h b/drivers/net/ngbe/base/ngbe_mbx.h
new file mode 100644
index 0000000000..d280945baf
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_mbx.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_MBX_H_
+#define _NGBE_MBX_H_
+
+void ngbe_init_mbx_params_pf(struct ngbe_hw *hw);
+
+#endif /* _NGBE_MBX_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 5a88d38e84..bc95fcf609 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -254,6 +254,9 @@ struct ngbe_mac_info {
u32 mc_addr_count,
ngbe_mc_addr_itr func, bool clear);
s32 (*clear_vfta)(struct ngbe_hw *hw);
+ void (*set_mac_anti_spoofing)(struct ngbe_hw *hw, bool enable, int vf);
+ void (*set_vlan_anti_spoofing)(struct ngbe_hw *hw,
+ bool enable, int vf);
/* Manageability interface */
s32 (*init_thermal_sensor_thresh)(struct ngbe_hw *hw);
@@ -305,6 +308,24 @@ struct ngbe_phy_info {
u32 autoneg_advertised;
};
+struct ngbe_mbx_stats {
+ u32 msgs_tx;
+ u32 msgs_rx;
+
+ u32 acks;
+ u32 reqs;
+ u32 rsts;
+};
+
+struct ngbe_mbx_info {
+ void (*init_params)(struct ngbe_hw *hw);
+
+ struct ngbe_mbx_stats stats;
+ u32 timeout;
+ u32 usec_delay;
+ u16 size;
+};
+
enum ngbe_isb_idx {
NGBE_ISB_HEADER,
NGBE_ISB_MISC,
@@ -321,6 +342,7 @@ struct ngbe_hw {
struct ngbe_phy_info phy;
struct ngbe_rom_info rom;
struct ngbe_bus_info bus;
+ struct ngbe_mbx_info mbx;
u16 device_id;
u16 vendor_id;
u16 sub_device_id;
diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
index c55e6c20e8..8b5195aab3 100644
--- a/drivers/net/ngbe/meson.build
+++ b/drivers/net/ngbe/meson.build
@@ -13,6 +13,7 @@ objs = [base_objs]
sources = files(
'ngbe_ethdev.c',
'ngbe_ptypes.c',
+ 'ngbe_pf.c',
'ngbe_rxtx.c',
)
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 0bc1400aea..70e471b2c2 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -304,7 +304,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
- int err;
+ int err, ret;
PMD_INIT_FUNC_TRACE();
@@ -423,6 +423,16 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
/* initialize the hw strip bitmap*/
memset(hwstrip, 0, sizeof(*hwstrip));
+ /* initialize PF if max_vfs not zero */
+ ret = ngbe_pf_host_init(eth_dev);
+ if (ret) {
+ rte_free(eth_dev->data->mac_addrs);
+ eth_dev->data->mac_addrs = NULL;
+ rte_free(eth_dev->data->hash_mac_addrs);
+ eth_dev->data->hash_mac_addrs = NULL;
+ return ret;
+ }
+
ctrl_ext = rd32(hw, NGBE_PORTCTL);
/* let hardware know driver is loaded */
ctrl_ext |= NGBE_PORTCTL_DRVLOAD;
@@ -926,6 +936,9 @@ ngbe_dev_start(struct rte_eth_dev *dev)
hw->mac.start_hw(hw);
hw->mac.get_link_status = true;
+ /* configure PF module if SRIOV enabled */
+ ngbe_pf_host_configure(dev);
+
ngbe_dev_phy_intr_setup(dev);
/* check and configure queue intr-vector mapping */
@@ -1087,8 +1100,10 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_vf_info *vfinfo = *NGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ int vf;
if (hw->adapter_stopped)
return 0;
@@ -1111,6 +1126,9 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
/* stop adapter */
ngbe_stop_hw(hw);
+ for (vf = 0; vfinfo != NULL && vf < pci_dev->max_vfs; vf++)
+ vfinfo[vf].clear_to_send = false;
+
ngbe_dev_clear_queues(dev);
/* Clear stored conf */
@@ -1183,6 +1201,9 @@ ngbe_dev_close(struct rte_eth_dev *dev)
rte_delay_ms(100);
} while (retries++ < (10 + NGBE_LINK_UP_TIME));
+ /* uninitialize PF if max_vfs not zero */
+ ngbe_pf_host_uninit(dev);
+
rte_free(dev->data->mac_addrs);
dev->data->mac_addrs = NULL;
@@ -1200,6 +1221,15 @@ ngbe_dev_reset(struct rte_eth_dev *dev)
{
int ret;
+	/* When a DPDK PMD PF begins to reset the PF port, it should notify
+	 * all of its VFs so that they stay aligned with it. The detailed
+	 * notification mechanism is PMD specific, and for the ngbe PF it is
+	 * rather complex. To avoid unexpected behavior in the VFs, reset of
+	 * a PF with SR-IOV activated is currently not supported. It might be
+	 * supported later.
+ */
+ if (dev->data->sriov.active)
+ return -ENOTSUP;
+
ret = eth_ngbe_dev_uninit(dev);
if (ret != 0)
return ret;
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 083db6080b..f5a1363d10 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -7,6 +7,8 @@
#define _NGBE_ETHDEV_H_
#include "ngbe_ptypes.h"
+#include <rte_ethdev.h>
+#include <rte_ethdev_core.h>
/* need update link, bit flag */
#define NGBE_FLAG_NEED_LINK_UPDATE ((uint32_t)(1 << 0))
@@ -75,6 +77,12 @@ struct ngbe_uta_info {
uint32_t uta_shadow[NGBE_MAX_UTA];
};
+struct ngbe_vf_info {
+ uint8_t vf_mac_addresses[RTE_ETHER_ADDR_LEN];
+ bool clear_to_send;
+ uint16_t switch_domain_id;
+};
+
/*
* Structure to store private data for each driver instance (for each port).
*/
@@ -85,6 +93,7 @@ struct ngbe_adapter {
struct ngbe_stat_mappings stat_mappings;
struct ngbe_vfta shadow_vfta;
struct ngbe_hwstrip hwstrip;
+ struct ngbe_vf_info *vfdata;
struct ngbe_uta_info uta_info;
bool rx_bulk_alloc_allowed;
@@ -129,6 +138,10 @@ ngbe_dev_intr(struct rte_eth_dev *dev)
#define NGBE_DEV_HWSTRIP(dev) \
(&((struct ngbe_adapter *)(dev)->data->dev_private)->hwstrip)
+
+#define NGBE_DEV_VFDATA(dev) \
+ (&((struct ngbe_adapter *)(dev)->data->dev_private)->vfdata)
+
#define NGBE_DEV_UTA_INFO(dev) \
(&((struct ngbe_adapter *)(dev)->data->dev_private)->uta_info)
@@ -216,6 +229,12 @@ void ngbe_vlan_hw_filter_disable(struct rte_eth_dev *dev);
void ngbe_vlan_hw_strip_config(struct rte_eth_dev *dev);
+int ngbe_pf_host_init(struct rte_eth_dev *eth_dev);
+
+void ngbe_pf_host_uninit(struct rte_eth_dev *eth_dev);
+
+int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev);
+
#define NGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
#define NGBE_LINK_UP_CHECK_TIMEOUT 1000 /* ms */
#define NGBE_VMDQ_NUM_UC_MAC 4096 /* Maximum nb. of UC MAC addr. */
diff --git a/drivers/net/ngbe/ngbe_pf.c b/drivers/net/ngbe/ngbe_pf.c
new file mode 100644
index 0000000000..2f9dfc4284
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_pf.c
@@ -0,0 +1,196 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include <rte_ether.h>
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+#include <rte_bus_pci.h>
+
+#include "base/ngbe.h"
+#include "ngbe_ethdev.h"
+
+#define NGBE_MAX_VFTA (128)
+
+static inline uint16_t
+dev_num_vf(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+	/* EM only supports 7 VFs. */
+ return pci_dev->max_vfs;
+}
+
+static inline
+int ngbe_vf_perm_addr_gen(struct rte_eth_dev *dev, uint16_t vf_num)
+{
+ unsigned char vf_mac_addr[RTE_ETHER_ADDR_LEN];
+ struct ngbe_vf_info *vfinfo = *NGBE_DEV_VFDATA(dev);
+ uint16_t vfn;
+
+ for (vfn = 0; vfn < vf_num; vfn++) {
+ rte_eth_random_addr(vf_mac_addr);
+ /* keep the random address as default */
+ memcpy(vfinfo[vfn].vf_mac_addresses, vf_mac_addr,
+ RTE_ETHER_ADDR_LEN);
+ }
+
+ return 0;
+}
+
+int ngbe_pf_host_init(struct rte_eth_dev *eth_dev)
+{
+ struct ngbe_vf_info **vfinfo = NGBE_DEV_VFDATA(eth_dev);
+ struct ngbe_uta_info *uta_info = NGBE_DEV_UTA_INFO(eth_dev);
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ uint16_t vf_num;
+ uint8_t nb_queue = 1;
+ int ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ RTE_ETH_DEV_SRIOV(eth_dev).active = 0;
+ vf_num = dev_num_vf(eth_dev);
+ if (vf_num == 0)
+ return ret;
+
+ *vfinfo = rte_zmalloc("vf_info",
+ sizeof(struct ngbe_vf_info) * vf_num, 0);
+ if (*vfinfo == NULL) {
+ PMD_INIT_LOG(ERR,
+ "Cannot allocate memory for private VF data\n");
+ return -ENOMEM;
+ }
+
+ ret = rte_eth_switch_domain_alloc(&(*vfinfo)->switch_domain_id);
+ if (ret) {
+ PMD_INIT_LOG(ERR,
+ "failed to allocate switch domain for device %d", ret);
+ rte_free(*vfinfo);
+ *vfinfo = NULL;
+ return ret;
+ }
+
+ memset(uta_info, 0, sizeof(struct ngbe_uta_info));
+ hw->mac.mc_filter_type = 0;
+
+ RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
+ RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
+ RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx =
+ (uint16_t)(vf_num * nb_queue);
+
+ ngbe_vf_perm_addr_gen(eth_dev, vf_num);
+
+ /* init_mailbox_params */
+ hw->mbx.init_params(hw);
+
+ return ret;
+}
+
+void ngbe_pf_host_uninit(struct rte_eth_dev *eth_dev)
+{
+ struct ngbe_vf_info **vfinfo;
+ uint16_t vf_num;
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ RTE_ETH_DEV_SRIOV(eth_dev).active = 0;
+ RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = 0;
+ RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = 0;
+
+ vf_num = dev_num_vf(eth_dev);
+ if (vf_num == 0)
+ return;
+
+ vfinfo = NGBE_DEV_VFDATA(eth_dev);
+ if (*vfinfo == NULL)
+ return;
+
+ ret = rte_eth_switch_domain_free((*vfinfo)->switch_domain_id);
+ if (ret)
+ PMD_INIT_LOG(WARNING, "failed to free switch domain: %d", ret);
+
+ rte_free(*vfinfo);
+ *vfinfo = NULL;
+}
+
+int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev)
+{
+ uint32_t vtctl, fcrth;
+ uint32_t vfre_offset;
+ uint16_t vf_num;
+ const uint8_t VFRE_SHIFT = 5; /* VFRE 32 bits per slot */
+ const uint8_t VFRE_MASK = (uint8_t)((1U << VFRE_SHIFT) - 1);
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ uint32_t gpie;
+ uint32_t gcr_ext;
+ uint32_t vlanctrl;
+ int i;
+
+ vf_num = dev_num_vf(eth_dev);
+ if (vf_num == 0)
+ return -1;
+
+ /* set the default pool for PF */
+ vtctl = rd32(hw, NGBE_POOLCTL);
+ vtctl &= ~NGBE_POOLCTL_DEFPL_MASK;
+ vtctl |= NGBE_POOLCTL_DEFPL(vf_num);
+ vtctl |= NGBE_POOLCTL_RPLEN;
+ wr32(hw, NGBE_POOLCTL, vtctl);
+
+ vfre_offset = vf_num & VFRE_MASK;
+
+ /* Enable pools reserved to PF only */
+ wr32(hw, NGBE_POOLRXENA(0), (~0U) << vfre_offset);
+ wr32(hw, NGBE_POOLTXENA(0), (~0U) << vfre_offset);
+
+ wr32(hw, NGBE_PSRCTL, NGBE_PSRCTL_LBENA);
+
+	/* clear VMDq map to permanent rar 0 */
+ hw->mac.clear_vmdq(hw, 0, BIT_MASK32);
+
+ /* clear VMDq map to scan rar 31 */
+ wr32(hw, NGBE_ETHADDRIDX, hw->mac.num_rar_entries);
+ wr32(hw, NGBE_ETHADDRASS, 0);
+
+ /* set VMDq map to default PF pool */
+ hw->mac.set_vmdq(hw, 0, vf_num);
+
+ /*
+	 * SW must set PORTCTL.VT_Mode the same as GPIE.VT_Mode
+ */
+ gpie = rd32(hw, NGBE_GPIE);
+ gpie |= NGBE_GPIE_MSIX;
+ gcr_ext = rd32(hw, NGBE_PORTCTL);
+ gcr_ext &= ~NGBE_PORTCTL_NUMVT_MASK;
+
+ if (RTE_ETH_DEV_SRIOV(eth_dev).active == ETH_8_POOLS)
+ gcr_ext |= NGBE_PORTCTL_NUMVT_8;
+
+ wr32(hw, NGBE_PORTCTL, gcr_ext);
+ wr32(hw, NGBE_GPIE, gpie);
+
+ /*
+ * enable vlan filtering and allow all vlan tags through
+ */
+ vlanctrl = rd32(hw, NGBE_VLANCTL);
+ vlanctrl |= NGBE_VLANCTL_VFE; /* enable vlan filters */
+ wr32(hw, NGBE_VLANCTL, vlanctrl);
+
+ /* enable all vlan filters */
+ for (i = 0; i < NGBE_MAX_VFTA; i++)
+ wr32(hw, NGBE_VLANTBL(i), 0xFFFFFFFF);
+
+ /* Enable MAC Anti-Spoofing */
+ hw->mac.set_mac_anti_spoofing(hw, FALSE, vf_num);
+
+ /* set flow control threshold to max to avoid tx switch hang */
+ wr32(hw, NGBE_FCWTRLO, 0);
+ fcrth = rd32(hw, NGBE_PBRXSIZE) - 32;
+ wr32(hw, NGBE_FCWTRHI, fcrth);
+
+ return 0;
+}
+
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 04abc2bb47..91cafed7fc 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -1886,7 +1886,8 @@ ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->hthresh = tx_conf->tx_thresh.hthresh;
txq->wthresh = tx_conf->tx_thresh.wthresh;
txq->queue_id = queue_idx;
- txq->reg_idx = queue_idx;
+ txq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
+ queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
txq->port_id = dev->data->port_id;
txq->offloads = offloads;
txq->ops = &def_txq_ops;
@@ -2138,7 +2139,8 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_conf->rx_free_thresh;
rxq->queue_id = queue_idx;
- rxq->reg_idx = queue_idx;
+ rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
+ queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
rxq->port_id = dev->data->port_id;
if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
rxq->crc_len = RTE_ETHER_CRC_LEN;
@@ -2530,16 +2532,18 @@ ngbe_alloc_rx_queue_mbufs(struct ngbe_rx_queue *rxq)
static int
ngbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
{
- switch (dev->data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- ngbe_rss_configure(dev);
- break;
+ if (RTE_ETH_DEV_SRIOV(dev).active == 0) {
+ switch (dev->data->dev_conf.rxmode.mq_mode) {
+ case ETH_MQ_RX_RSS:
+ ngbe_rss_configure(dev);
+ break;
- case ETH_MQ_RX_NONE:
- default:
- /* if mq_mode is none, disable rss mode.*/
- ngbe_rss_disable(dev);
- break;
+ case ETH_MQ_RX_NONE:
+ default:
+ /* if mq_mode is none, disable rss mode.*/
+ ngbe_rss_disable(dev);
+ break;
+ }
}
return 0;
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 19/32] net/ngbe: add mailbox process operations
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (17 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 18/32] net/ngbe: support SRIOV Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-15 16:56 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 20/32] net/ngbe: support flow control Jiawen Wu
` (12 subsequent siblings)
31 siblings, 1 reply; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add operations to check for VF function-level reset,
mailbox messages and ACKs from a VF,
and process the messages as they arrive.
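
For reference, a minimal sketch of how an application could hook these
mailbox events once this patch is applied. It assumes the generic
RTE_ETH_EVENT_VF_MBOX callback path wired up below; vf_mbox_cb and its
policy of rejecting VF MAC changes are illustrative, not part of this
patch:

	#include <rte_common.h>
	#include <rte_ethdev.h>
	#include <rte_pmd_ngbe.h>

	/* Called by the PMD for each VF request before it is processed. */
	static int
	vf_mbox_cb(uint16_t port_id, enum rte_eth_event_type event,
		   void *cb_arg, void *ret_param)
	{
		struct rte_pmd_ngbe_mb_event_param *p = ret_param;

		RTE_SET_USED(port_id);
		RTE_SET_USED(event);
		RTE_SET_USED(cb_arg);

		/* 0x02 is NGBE_VF_SET_MAC_ADDR in ngbe_mbx.h */
		if (p->msg_type == 0x02)
			p->retval = RTE_PMD_NGBE_MB_EVENT_NOOP_NACK;
		else
			p->retval = RTE_PMD_NGBE_MB_EVENT_PROCEED;

		return 0;
	}

	/* after the port is configured:
	 * rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_VF_MBOX,
	 *                               vf_mbox_cb, NULL);
	 */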
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/base/ngbe.h | 4 +
drivers/net/ngbe/base/ngbe_dummy.h | 39 ++
drivers/net/ngbe/base/ngbe_hw.c | 215 +++++++++++
drivers/net/ngbe/base/ngbe_hw.h | 8 +
drivers/net/ngbe/base/ngbe_mbx.c | 297 +++++++++++++++
drivers/net/ngbe/base/ngbe_mbx.h | 78 ++++
drivers/net/ngbe/base/ngbe_type.h | 10 +
drivers/net/ngbe/meson.build | 2 +
drivers/net/ngbe/ngbe_ethdev.c | 7 +
drivers/net/ngbe/ngbe_ethdev.h | 13 +
drivers/net/ngbe/ngbe_pf.c | 564 +++++++++++++++++++++++++++++
drivers/net/ngbe/rte_pmd_ngbe.h | 39 ++
12 files changed, 1276 insertions(+)
create mode 100644 drivers/net/ngbe/rte_pmd_ngbe.h
diff --git a/drivers/net/ngbe/base/ngbe.h b/drivers/net/ngbe/base/ngbe.h
index fe85b07b57..1d17c2f115 100644
--- a/drivers/net/ngbe/base/ngbe.h
+++ b/drivers/net/ngbe/base/ngbe.h
@@ -6,6 +6,10 @@
#define _NGBE_H_
#include "ngbe_type.h"
+#include "ngbe_mng.h"
+#include "ngbe_mbx.h"
+#include "ngbe_eeprom.h"
+#include "ngbe_phy.h"
#include "ngbe_hw.h"
#endif /* _NGBE_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 5cb09bfcaa..940b448734 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -136,6 +136,16 @@ static inline s32 ngbe_mac_clear_vfta_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_mac_set_vfta_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+ u32 TUP2, bool TUP3, bool TUP4)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_set_vlvf_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+ u32 TUP2, bool TUP3, u32 *TUP4, u32 TUP5, bool TUP6)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline void ngbe_mac_set_mac_anti_spoofing_dummy(struct ngbe_hw *TUP0,
bool TUP1, int TUP2)
{
@@ -200,6 +210,28 @@ static inline s32 ngbe_phy_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1,
static inline void ngbe_mbx_init_params_dummy(struct ngbe_hw *TUP0)
{
}
+static inline s32 ngbe_mbx_read_dummy(struct ngbe_hw *TUP0, u32 *TUP1,
+ u16 TUP2, u16 TUP3)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mbx_write_dummy(struct ngbe_hw *TUP0, u32 *TUP1,
+ u16 TUP2, u16 TUP3)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mbx_check_for_msg_dummy(struct ngbe_hw *TUP0, u16 TUP1)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mbx_check_for_ack_dummy(struct ngbe_hw *TUP0, u16 TUP1)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mbx_check_for_rst_dummy(struct ngbe_hw *TUP0, u16 TUP1)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
{
@@ -228,6 +260,8 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->mac.init_rx_addrs = ngbe_mac_init_rx_addrs_dummy;
hw->mac.update_mc_addr_list = ngbe_mac_update_mc_addr_list_dummy;
hw->mac.clear_vfta = ngbe_mac_clear_vfta_dummy;
+ hw->mac.set_vfta = ngbe_mac_set_vfta_dummy;
+ hw->mac.set_vlvf = ngbe_mac_set_vlvf_dummy;
hw->mac.set_mac_anti_spoofing = ngbe_mac_set_mac_anti_spoofing_dummy;
hw->mac.set_vlan_anti_spoofing = ngbe_mac_set_vlan_anti_spoofing_dummy;
hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy;
@@ -242,6 +276,11 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->phy.setup_link = ngbe_phy_setup_link_dummy;
hw->phy.check_link = ngbe_phy_check_link_dummy;
hw->mbx.init_params = ngbe_mbx_init_params_dummy;
+ hw->mbx.read = ngbe_mbx_read_dummy;
+ hw->mbx.write = ngbe_mbx_write_dummy;
+ hw->mbx.check_for_msg = ngbe_mbx_check_for_msg_dummy;
+ hw->mbx.check_for_ack = ngbe_mbx_check_for_ack_dummy;
+ hw->mbx.check_for_rst = ngbe_mbx_check_for_rst_dummy;
}
#endif /* _NGBE_TYPE_DUMMY_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 8b45a91f78..afde58a89e 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -914,6 +914,214 @@ s32 ngbe_init_uta_tables(struct ngbe_hw *hw)
return 0;
}
+/**
+ * ngbe_find_vlvf_slot - find the vlanid or the first empty slot
+ * @hw: pointer to hardware structure
+ * @vlan: VLAN id to write to VLAN filter
+ * @vlvf_bypass: true to find the VLAN id only; false to also return the
+ * first empty slot if the VLAN id is not found
+ *
+ * Return the VLVF index where this VLAN id should be placed.
+ **/
+s32 ngbe_find_vlvf_slot(struct ngbe_hw *hw, u32 vlan, bool vlvf_bypass)
+{
+ s32 regindex, first_empty_slot;
+ u32 bits;
+
+ /* short cut the special case */
+ if (vlan == 0)
+ return 0;
+
+ /* if vlvf_bypass is set we don't want to use an empty slot, we
+ * will simply bypass the VLVF if there are no entries present in the
+ * VLVF that contain our VLAN
+ */
+ first_empty_slot = vlvf_bypass ? NGBE_ERR_NO_SPACE : 0;
+
+ /* add VLAN enable bit for comparison */
+ vlan |= NGBE_PSRVLAN_EA;
+
+ /* Search for the vlan id in the VLVF entries. Save off the first empty
+ * slot found along the way.
+ *
+ * pre-decrement loop covering (NGBE_NUM_POOL - 1) .. 1
+ */
+ for (regindex = NGBE_NUM_POOL; --regindex;) {
+ wr32(hw, NGBE_PSRVLANIDX, regindex);
+ bits = rd32(hw, NGBE_PSRVLAN);
+ if (bits == vlan)
+ return regindex;
+ if (!first_empty_slot && !bits)
+ first_empty_slot = regindex;
+ }
+
+ /* If we are here then we didn't find the VLAN. Return first empty
+ * slot we found during our search, else error.
+ */
+ if (!first_empty_slot)
+ DEBUGOUT("No space in VLVF.\n");
+
+ return first_empty_slot ? first_empty_slot : NGBE_ERR_NO_SPACE;
+}
+
+/**
+ * ngbe_set_vfta - Set VLAN filter table
+ * @hw: pointer to hardware structure
+ * @vlan: VLAN id to write to VLAN filter
+ * @vind: VMDq output index that maps queue to VLAN id in VLVFB
+ * @vlan_on: boolean flag to turn on/off VLAN
+ * @vlvf_bypass: boolean flag indicating updating default pool is okay
+ *
+ * Turn on/off specified VLAN in the VLAN filter table.
+ **/
+s32 ngbe_set_vfta(struct ngbe_hw *hw, u32 vlan, u32 vind,
+ bool vlan_on, bool vlvf_bypass)
+{
+ u32 regidx, vfta_delta, vfta;
+ s32 err;
+
+ DEBUGFUNC("ngbe_set_vfta");
+
+ if (vlan > 4095 || vind > 63)
+ return NGBE_ERR_PARAM;
+
+ /*
+ * this is a 2 part operation - first the VFTA, then the
+ * VLVF and VLVFB if VT Mode is set
+ * We don't write the VFTA until we know the VLVF part succeeded.
+ */
+
+ /* Part 1
+ * The VFTA is a bitstring made up of 128 32-bit registers
+ * that enable the particular VLAN id, much like the MTA:
+ * bits[11-5]: which register
+ * bits[4-0]: which bit in the register
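+	 * e.g. VLAN id 100: bit 4 (100 % 32) in register 3 (100 / 32)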
+ */
+ regidx = vlan / 32;
+ vfta_delta = 1 << (vlan % 32);
+ vfta = rd32(hw, NGBE_VLANTBL(regidx));
+
+ /*
+ * vfta_delta represents the difference between the current value
+ * of vfta and the value we want in the register. Since the diff
+ * is an XOR mask we can just update the vfta using an XOR
+ */
+ vfta_delta &= vlan_on ? ~vfta : vfta;
+ vfta ^= vfta_delta;
+
+ /* Part 2
+ * Call ngbe_set_vlvf to set VLVFB and VLVF
+ */
+ err = ngbe_set_vlvf(hw, vlan, vind, vlan_on, &vfta_delta,
+ vfta, vlvf_bypass);
+ if (err != 0) {
+ if (vlvf_bypass)
+ goto vfta_update;
+ return err;
+ }
+
+vfta_update:
+ /* Update VFTA now that we are ready for traffic */
+ if (vfta_delta)
+ wr32(hw, NGBE_VLANTBL(regidx), vfta);
+
+ return 0;
+}
+
+/**
+ * ngbe_set_vlvf - Set VLAN Pool Filter
+ * @hw: pointer to hardware structure
+ * @vlan: VLAN id to write to VLAN filter
+ * @vind: VMDq output index that maps queue to VLAN id in PSRVLANPLM
+ * @vlan_on: boolean flag to turn on/off VLAN in PSRVLAN
+ * @vfta_delta: pointer to the difference between the current value
+ * of PSRVLANPLM and the desired value
+ * @vfta: the desired value of the VFTA
+ * @vlvf_bypass: boolean flag indicating updating default pool is okay
+ *
+ * Turn on/off specified bit in VLVF table.
+ **/
+s32 ngbe_set_vlvf(struct ngbe_hw *hw, u32 vlan, u32 vind,
+ bool vlan_on, u32 *vfta_delta, u32 vfta,
+ bool vlvf_bypass)
+{
+ u32 bits;
+ u32 portctl;
+ s32 vlvf_index;
+
+ DEBUGFUNC("ngbe_set_vlvf");
+
+ if (vlan > 4095 || vind > 63)
+ return NGBE_ERR_PARAM;
+
+ /* If VT Mode is set
+ * Either vlan_on
+ * make sure the vlan is in PSRVLAN
+ * set the vind bit in the matching PSRVLANPLM
+ * Or !vlan_on
+ * clear the pool bit and possibly the vind
+ */
+ portctl = rd32(hw, NGBE_PORTCTL);
+ if (!(portctl & NGBE_PORTCTL_NUMVT_MASK))
+ return 0;
+
+ vlvf_index = ngbe_find_vlvf_slot(hw, vlan, vlvf_bypass);
+ if (vlvf_index < 0)
+ return vlvf_index;
+
+ wr32(hw, NGBE_PSRVLANIDX, vlvf_index);
+ bits = rd32(hw, NGBE_PSRVLANPLM(vind / 32));
+
+ /* set the pool bit */
+ bits |= 1 << (vind % 32);
+ if (vlan_on)
+ goto vlvf_update;
+
+ /* clear the pool bit */
+ bits ^= 1 << (vind % 32);
+
+ if (!bits &&
+ !rd32(hw, NGBE_PSRVLANPLM(vind / 32))) {
+ /* Clear PSRVLANPLM first, then disable PSRVLAN. Otherwise
+ * we run the risk of stray packets leaking into
+ * the PF via the default pool
+ */
+ if (*vfta_delta)
+ wr32(hw, NGBE_PSRVLANPLM(vlan / 32), vfta);
+
+ /* disable VLVF and clear remaining bit from pool */
+ wr32(hw, NGBE_PSRVLAN, 0);
+ wr32(hw, NGBE_PSRVLANPLM(vind / 32), 0);
+
+ return 0;
+ }
+
+ /* If there are still bits set in the PSRVLANPLM registers
+ * for the VLAN ID indicated we need to see if the
+ * caller is requesting that we clear the PSRVLANPLM entry bit.
+ * If the caller has requested that we clear the PSRVLANPLM
+ * entry bit but there are still pools/VFs using this VLAN
+ * ID entry then ignore the request. We're not worried
+ * about the case where we're turning the PSRVLANPLM VLAN ID
+ * entry bit on, only when requested to turn it off as
+ * there may be multiple pools and/or VFs using the
+ * VLAN ID entry. In that case we cannot clear the
+ * PSRVLANPLM bit until all pools/VFs using that VLAN ID have also
+ * been cleared. This will be indicated by "bits" being
+ * zero.
+ */
+ *vfta_delta = 0;
+
+vlvf_update:
+ /* record pool change and enable VLAN ID if not already enabled */
+ wr32(hw, NGBE_PSRVLANPLM(vind / 32), bits);
+ wr32(hw, NGBE_PSRVLAN, NGBE_PSRVLAN_EA | vlan);
+
+ return 0;
+}
+
/**
* ngbe_clear_vfta - Clear VLAN filter table
* @hw: pointer to hardware structure
@@ -1306,6 +1514,8 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
mac->update_mc_addr_list = ngbe_update_mc_addr_list;
mac->set_vmdq = ngbe_set_vmdq;
mac->clear_vmdq = ngbe_clear_vmdq;
+ mac->set_vfta = ngbe_set_vfta;
+ mac->set_vlvf = ngbe_set_vlvf;
mac->clear_vfta = ngbe_clear_vfta;
mac->set_mac_anti_spoofing = ngbe_set_mac_anti_spoofing;
mac->set_vlan_anti_spoofing = ngbe_set_vlan_anti_spoofing;
@@ -1320,6 +1530,11 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
mac->check_overtemp = ngbe_mac_check_overtemp;
mbx->init_params = ngbe_init_mbx_params_pf;
+ mbx->read = ngbe_read_mbx_pf;
+ mbx->write = ngbe_write_mbx_pf;
+ mbx->check_for_msg = ngbe_check_for_msg_pf;
+ mbx->check_for_ack = ngbe_check_for_ack_pf;
+ mbx->check_for_rst = ngbe_check_for_rst_pf;
/* EEPROM */
rom->init_params = ngbe_init_eeprom_params;
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index a27bd3e650..83ad646dde 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -49,8 +49,16 @@ void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask);
s32 ngbe_set_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq);
s32 ngbe_clear_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq);
s32 ngbe_init_uta_tables(struct ngbe_hw *hw);
+s32 ngbe_set_vfta(struct ngbe_hw *hw, u32 vlan,
+ u32 vind, bool vlan_on, bool vlvf_bypass);
+s32 ngbe_set_vlvf(struct ngbe_hw *hw, u32 vlan, u32 vind,
+ bool vlan_on, u32 *vfta_delta, u32 vfta,
+ bool vlvf_bypass);
s32 ngbe_clear_vfta(struct ngbe_hw *hw);
+s32 ngbe_find_vlvf_slot(struct ngbe_hw *hw, u32 vlan, bool vlvf_bypass);
+void ngbe_set_mac_anti_spoofing(struct ngbe_hw *hw, bool enable, int vf);
+void ngbe_set_vlan_anti_spoofing(struct ngbe_hw *hw, bool enable, int vf);
s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw);
s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw);
void ngbe_disable_rx(struct ngbe_hw *hw);
diff --git a/drivers/net/ngbe/base/ngbe_mbx.c b/drivers/net/ngbe/base/ngbe_mbx.c
index 1ac9531ceb..764ae81319 100644
--- a/drivers/net/ngbe/base/ngbe_mbx.c
+++ b/drivers/net/ngbe/base/ngbe_mbx.c
@@ -7,6 +7,303 @@
#include "ngbe_mbx.h"
+/**
+ * ngbe_read_mbx - Reads a message from the mailbox
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: id of mailbox to read
+ *
+ * returns 0 if it successfully read message from buffer
+ **/
+s32 ngbe_read_mbx(struct ngbe_hw *hw, u32 *msg, u16 size, u16 mbx_id)
+{
+ struct ngbe_mbx_info *mbx = &hw->mbx;
+ s32 ret_val = NGBE_ERR_MBX;
+
+ DEBUGFUNC("ngbe_read_mbx");
+
+ /* limit read to size of mailbox */
+ if (size > mbx->size)
+ size = mbx->size;
+
+ if (mbx->read)
+ ret_val = mbx->read(hw, msg, size, mbx_id);
+
+ return ret_val;
+}
+
+/**
+ * ngbe_write_mbx - Write a message to the mailbox
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: id of mailbox to write
+ *
+ * returns 0 if it successfully copied message into the buffer
+ **/
+s32 ngbe_write_mbx(struct ngbe_hw *hw, u32 *msg, u16 size, u16 mbx_id)
+{
+ struct ngbe_mbx_info *mbx = &hw->mbx;
+ s32 ret_val = 0;
+
+ DEBUGFUNC("ngbe_write_mbx");
+
+ if (size > mbx->size) {
+ ret_val = NGBE_ERR_MBX;
+ DEBUGOUT("Invalid mailbox message size %d", size);
+ } else if (mbx->write) {
+ ret_val = mbx->write(hw, msg, size, mbx_id);
+ }
+
+ return ret_val;
+}
+
+/**
+ * ngbe_check_for_msg - checks to see if someone sent us mail
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to check
+ *
+ * returns 0 if the Status bit was found or else ERR_MBX
+ **/
+s32 ngbe_check_for_msg(struct ngbe_hw *hw, u16 mbx_id)
+{
+ struct ngbe_mbx_info *mbx = &hw->mbx;
+ s32 ret_val = NGBE_ERR_MBX;
+
+ DEBUGFUNC("ngbe_check_for_msg");
+
+ if (mbx->check_for_msg)
+ ret_val = mbx->check_for_msg(hw, mbx_id);
+
+ return ret_val;
+}
+
+/**
+ * ngbe_check_for_ack - checks to see if someone sent us ACK
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to check
+ *
+ * returns 0 if the Status bit was found or else ERR_MBX
+ **/
+s32 ngbe_check_for_ack(struct ngbe_hw *hw, u16 mbx_id)
+{
+ struct ngbe_mbx_info *mbx = &hw->mbx;
+ s32 ret_val = NGBE_ERR_MBX;
+
+ DEBUGFUNC("ngbe_check_for_ack");
+
+ if (mbx->check_for_ack)
+ ret_val = mbx->check_for_ack(hw, mbx_id);
+
+ return ret_val;
+}
+
+/**
+ * ngbe_check_for_rst - checks to see if other side has reset
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to check
+ *
+ * returns 0 if the Status bit was found or else ERR_MBX
+ **/
+s32 ngbe_check_for_rst(struct ngbe_hw *hw, u16 mbx_id)
+{
+ struct ngbe_mbx_info *mbx = &hw->mbx;
+ s32 ret_val = NGBE_ERR_MBX;
+
+ DEBUGFUNC("ngbe_check_for_rst");
+
+ if (mbx->check_for_rst)
+ ret_val = mbx->check_for_rst(hw, mbx_id);
+
+ return ret_val;
+}
+
+STATIC s32 ngbe_check_for_bit_pf(struct ngbe_hw *hw, u32 mask)
+{
+ u32 mbvficr = rd32(hw, NGBE_MBVFICR);
+ s32 ret_val = NGBE_ERR_MBX;
+
+ if (mbvficr & mask) {
+ ret_val = 0;
+ wr32(hw, NGBE_MBVFICR, mask);
+ }
+
+ return ret_val;
+}
+
+/**
+ * ngbe_check_for_msg_pf - checks to see if the VF has sent mail
+ * @hw: pointer to the HW structure
+ * @vf_number: the VF index
+ *
+ * returns 0 if the VF has set the Status bit or else ERR_MBX
+ **/
+s32 ngbe_check_for_msg_pf(struct ngbe_hw *hw, u16 vf_number)
+{
+ s32 ret_val = NGBE_ERR_MBX;
+ u32 vf_bit = vf_number;
+
+ DEBUGFUNC("ngbe_check_for_msg_pf");
+
+ if (!ngbe_check_for_bit_pf(hw, NGBE_MBVFICR_VFREQ_VF1 << vf_bit)) {
+ ret_val = 0;
+ hw->mbx.stats.reqs++;
+ }
+
+ return ret_val;
+}
+
+/**
+ * ngbe_check_for_ack_pf - checks to see if the VF has ACKed
+ * @hw: pointer to the HW structure
+ * @vf_number: the VF index
+ *
+ * returns 0 if the VF has set the Status bit or else ERR_MBX
+ **/
+s32 ngbe_check_for_ack_pf(struct ngbe_hw *hw, u16 vf_number)
+{
+ s32 ret_val = NGBE_ERR_MBX;
+ u32 vf_bit = vf_number;
+
+ DEBUGFUNC("ngbe_check_for_ack_pf");
+
+ if (!ngbe_check_for_bit_pf(hw, NGBE_MBVFICR_VFACK_VF1 << vf_bit)) {
+ ret_val = 0;
+ hw->mbx.stats.acks++;
+ }
+
+ return ret_val;
+}
+
+/**
+ * ngbe_check_for_rst_pf - checks to see if the VF has reset
+ * @hw: pointer to the HW structure
+ * @vf_number: the VF index
+ *
+ * returns 0 if the VF has set the Status bit or else ERR_MBX
+ **/
+s32 ngbe_check_for_rst_pf(struct ngbe_hw *hw, u16 vf_number)
+{
+ u32 vflre = 0;
+ s32 ret_val = NGBE_ERR_MBX;
+
+ DEBUGFUNC("ngbe_check_for_rst_pf");
+
+ vflre = rd32(hw, NGBE_FLRVFE);
+ if (vflre & (1 << vf_number)) {
+ ret_val = 0;
+ wr32(hw, NGBE_FLRVFEC, (1 << vf_number));
+ hw->mbx.stats.rsts++;
+ }
+
+ return ret_val;
+}
+
+/**
+ * ngbe_obtain_mbx_lock_pf - obtain mailbox lock
+ * @hw: pointer to the HW structure
+ * @vf_number: the VF index
+ *
+ * return 0 if we obtained the mailbox lock
+ **/
+STATIC s32 ngbe_obtain_mbx_lock_pf(struct ngbe_hw *hw, u16 vf_number)
+{
+ s32 ret_val = NGBE_ERR_MBX;
+ u32 p2v_mailbox;
+
+ DEBUGFUNC("ngbe_obtain_mbx_lock_pf");
+
+ /* Take ownership of the buffer */
+ wr32(hw, NGBE_MBCTL(vf_number), NGBE_MBCTL_PFU);
+
+ /* reserve mailbox for vf use */
+ p2v_mailbox = rd32(hw, NGBE_MBCTL(vf_number));
+ if (p2v_mailbox & NGBE_MBCTL_PFU)
+ ret_val = 0;
+ else
+ DEBUGOUT("Failed to obtain mailbox lock for VF%d", vf_number);
+
+ return ret_val;
+}
+
+/**
+ * ngbe_write_mbx_pf - Places a message in the mailbox
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @vf_number: the VF index
+ *
+ * returns 0 if it successfully copied message into the buffer
+ **/
+s32 ngbe_write_mbx_pf(struct ngbe_hw *hw, u32 *msg, u16 size, u16 vf_number)
+{
+ s32 ret_val;
+ u16 i;
+
+ DEBUGFUNC("ngbe_write_mbx_pf");
+
+ /* lock the mailbox to prevent pf/vf race condition */
+ ret_val = ngbe_obtain_mbx_lock_pf(hw, vf_number);
+ if (ret_val)
+ goto out_no_write;
+
+ /* flush msg and acks as we are overwriting the message buffer */
+ ngbe_check_for_msg_pf(hw, vf_number);
+ ngbe_check_for_ack_pf(hw, vf_number);
+
+ /* copy the caller specified message to the mailbox memory buffer */
+ for (i = 0; i < size; i++)
+ wr32a(hw, NGBE_MBMEM(vf_number), i, msg[i]);
+
+ /* Interrupt VF to tell it a message has been sent and release buffer*/
+ wr32(hw, NGBE_MBCTL(vf_number), NGBE_MBCTL_STS);
+
+ /* update stats */
+ hw->mbx.stats.msgs_tx++;
+
+out_no_write:
+ return ret_val;
+}
+
+/**
+ * ngbe_read_mbx_pf - Read a message from the mailbox
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @vf_number: the VF index
+ *
+ * This function copies a message from the mailbox buffer to the caller's
+ * memory buffer. The presumption is that the caller knows that there was
+ * a message due to a VF request so no polling for message is needed.
+ **/
+s32 ngbe_read_mbx_pf(struct ngbe_hw *hw, u32 *msg, u16 size, u16 vf_number)
+{
+ s32 ret_val;
+ u16 i;
+
+ DEBUGFUNC("ngbe_read_mbx_pf");
+
+ /* lock the mailbox to prevent pf/vf race condition */
+ ret_val = ngbe_obtain_mbx_lock_pf(hw, vf_number);
+ if (ret_val)
+ goto out_no_read;
+
+ /* copy the message to the mailbox memory buffer */
+ for (i = 0; i < size; i++)
+ msg[i] = rd32a(hw, NGBE_MBMEM(vf_number), i);
+
+ /* Acknowledge the message and release buffer */
+ wr32(hw, NGBE_MBCTL(vf_number), NGBE_MBCTL_ACK);
+
+ /* update stats */
+ hw->mbx.stats.msgs_rx++;
+
+out_no_read:
+ return ret_val;
+}
+
/**
* ngbe_init_mbx_params_pf - set initial values for pf mailbox
* @hw: pointer to the HW structure
diff --git a/drivers/net/ngbe/base/ngbe_mbx.h b/drivers/net/ngbe/base/ngbe_mbx.h
index d280945baf..d47da2718c 100644
--- a/drivers/net/ngbe/base/ngbe_mbx.h
+++ b/drivers/net/ngbe/base/ngbe_mbx.h
@@ -6,6 +6,84 @@
#ifndef _NGBE_MBX_H_
#define _NGBE_MBX_H_
+#define NGBE_ERR_MBX -100
+
+/* If it's a NGBE_VF_* msg then it originates in the VF and is sent to the
+ * PF. The reverse is true if it is NGBE_PF_*.
+ * Message ACKs are the value or'd with 0xF0000000
+ */
+/* Messages below or'd with this are the ACK */
+#define NGBE_VT_MSGTYPE_ACK 0x80000000
+/* Messages below or'd with this are the NACK */
+#define NGBE_VT_MSGTYPE_NACK 0x40000000
+/* Indicates that VF is still clear to send requests */
+#define NGBE_VT_MSGTYPE_CTS 0x20000000
+
+#define NGBE_VT_MSGINFO_SHIFT 16
+/* bits 23:16 are used for extra info for certain messages */
+#define NGBE_VT_MSGINFO_MASK (0xFF << NGBE_VT_MSGINFO_SHIFT)
+
+/*
+ * each element denotes a version of the API; existing numbers may not
+ * change; any additions must go at the end
+ */
+enum ngbe_pfvf_api_rev {
+ ngbe_mbox_api_null,
+ ngbe_mbox_api_10, /* API version 1.0, linux/freebsd VF driver */
+ ngbe_mbox_api_11, /* API version 1.1, linux/freebsd VF driver */
+ ngbe_mbox_api_12, /* API version 1.2, linux/freebsd VF driver */
+ ngbe_mbox_api_13, /* API version 1.3, linux/freebsd VF driver */
+ ngbe_mbox_api_20, /* API version 2.0, solaris Phase1 VF driver */
+ /* This value should always be last */
+ ngbe_mbox_api_unknown, /* indicates that API version is not known */
+};
+
+/* mailbox API, legacy requests */
+#define NGBE_VF_RESET 0x01 /* VF requests reset */
+#define NGBE_VF_SET_MAC_ADDR 0x02 /* VF requests PF to set MAC addr */
+#define NGBE_VF_SET_MULTICAST 0x03 /* VF requests PF to set MC addr */
+#define NGBE_VF_SET_VLAN 0x04 /* VF requests PF to set VLAN */
+
+/* mailbox API, version 1.0 VF requests */
+#define NGBE_VF_SET_LPE 0x05 /* VF requests PF to set VMOLR.LPE */
+#define NGBE_VF_SET_MACVLAN 0x06 /* VF requests PF for unicast filter */
+#define NGBE_VF_API_NEGOTIATE 0x08 /* negotiate API version */
+
+/* mailbox API, version 1.1 VF requests */
+#define NGBE_VF_GET_QUEUES 0x09 /* get queue configuration */
+
+/* mailbox API, version 1.2 VF requests */
+#define NGBE_VF_GET_RETA 0x0a /* VF request for RETA */
+#define NGBE_VF_GET_RSS_KEY 0x0b /* get RSS key */
+#define NGBE_VF_UPDATE_XCAST_MODE 0x0c
+
+/* mode choices for NGBE_VF_UPDATE_XCAST_MODE */
+enum ngbevf_xcast_modes {
+ NGBEVF_XCAST_MODE_NONE = 0,
+ NGBEVF_XCAST_MODE_MULTI,
+ NGBEVF_XCAST_MODE_ALLMULTI,
+ NGBEVF_XCAST_MODE_PROMISC,
+};
+
+/* GET_QUEUES return data indices within the mailbox */
+#define NGBE_VF_TX_QUEUES 1 /* number of Tx queues supported */
+#define NGBE_VF_RX_QUEUES 2 /* number of Rx queues supported */
+#define NGBE_VF_TRANS_VLAN 3 /* Indication of port vlan */
+#define NGBE_VF_DEF_QUEUE 4 /* Default queue offset */
+
+/* length of permanent address message returned from PF */
+#define NGBE_VF_PERMADDR_MSG_LEN 4
+s32 ngbe_read_mbx(struct ngbe_hw *hw, u32 *msg, u16 size, u16 mbx_id);
+s32 ngbe_write_mbx(struct ngbe_hw *hw, u32 *msg, u16 size, u16 mbx_id);
+s32 ngbe_check_for_msg(struct ngbe_hw *hw, u16 mbx_id);
+s32 ngbe_check_for_ack(struct ngbe_hw *hw, u16 mbx_id);
+s32 ngbe_check_for_rst(struct ngbe_hw *hw, u16 mbx_id);
void ngbe_init_mbx_params_pf(struct ngbe_hw *hw);
+s32 ngbe_read_mbx_pf(struct ngbe_hw *hw, u32 *msg, u16 size, u16 vf_number);
+s32 ngbe_write_mbx_pf(struct ngbe_hw *hw, u32 *msg, u16 size, u16 vf_number);
+s32 ngbe_check_for_msg_pf(struct ngbe_hw *hw, u16 vf_number);
+s32 ngbe_check_for_ack_pf(struct ngbe_hw *hw, u16 vf_number);
+s32 ngbe_check_for_rst_pf(struct ngbe_hw *hw, u16 vf_number);
+
#endif /* _NGBE_MBX_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index bc95fcf609..7a85f82abd 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -254,6 +254,11 @@ struct ngbe_mac_info {
u32 mc_addr_count,
ngbe_mc_addr_itr func, bool clear);
s32 (*clear_vfta)(struct ngbe_hw *hw);
+ s32 (*set_vfta)(struct ngbe_hw *hw, u32 vlan,
+ u32 vind, bool vlan_on, bool vlvf_bypass);
+ s32 (*set_vlvf)(struct ngbe_hw *hw, u32 vlan, u32 vind,
+ bool vlan_on, u32 *vfta_delta, u32 vfta,
+ bool vlvf_bypass);
void (*set_mac_anti_spoofing)(struct ngbe_hw *hw, bool enable, int vf);
void (*set_vlan_anti_spoofing)(struct ngbe_hw *hw,
bool enable, int vf);
@@ -319,6 +324,11 @@ struct ngbe_mbx_stats {
struct ngbe_mbx_info {
void (*init_params)(struct ngbe_hw *hw);
+ s32 (*read)(struct ngbe_hw *hw, u32 *msg, u16 size, u16 vf_number);
+ s32 (*write)(struct ngbe_hw *hw, u32 *msg, u16 size, u16 vf_number);
+ s32 (*check_for_msg)(struct ngbe_hw *hw, u16 mbx_id);
+ s32 (*check_for_ack)(struct ngbe_hw *hw, u16 mbx_id);
+ s32 (*check_for_rst)(struct ngbe_hw *hw, u16 mbx_id);
struct ngbe_mbx_stats stats;
u32 timeout;
diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
index 8b5195aab3..b276ec3341 100644
--- a/drivers/net/ngbe/meson.build
+++ b/drivers/net/ngbe/meson.build
@@ -20,3 +20,5 @@ sources = files(
deps += ['hash']
includes += include_directories('base')
+
+install_headers('rte_pmd_ngbe.h')
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 70e471b2c2..52d7b6376d 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -2123,6 +2123,11 @@ ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
PMD_DRV_LOG(DEBUG, "intr action type %d", intr->flags);
+ if (intr->flags & NGBE_FLAG_MAILBOX) {
+ ngbe_pf_mbx_process(dev);
+ intr->flags &= ~NGBE_FLAG_MAILBOX;
+ }
+
if (intr->flags & NGBE_FLAG_NEED_LINK_UPDATE) {
struct rte_eth_link link;
@@ -2183,6 +2188,8 @@ ngbe_dev_interrupt_delayed_handler(void *param)
ngbe_disable_intr(hw);
eicr = ((u32 *)hw->isb_mem)[NGBE_ISB_MISC];
+ if (eicr & NGBE_ICRMISC_VFMBX)
+ ngbe_pf_mbx_process(dev);
if (intr->flags & NGBE_FLAG_NEED_LINK_UPDATE) {
ngbe_dev_link_update(dev, 0);
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index f5a1363d10..26911cc7d2 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -71,6 +71,11 @@ struct ngbe_hwstrip {
uint32_t bitmap[NGBE_HWSTRIP_BITMAP_SIZE];
};
+/*
+ * VF data used by the PF host only
+ */
+#define NGBE_MAX_VF_MC_ENTRIES 30
+
struct ngbe_uta_info {
uint8_t uc_filter_type;
uint16_t uta_in_use;
@@ -79,8 +84,14 @@ struct ngbe_uta_info {
struct ngbe_vf_info {
uint8_t vf_mac_addresses[RTE_ETHER_ADDR_LEN];
+ uint16_t vf_mc_hashes[NGBE_MAX_VF_MC_ENTRIES];
+ uint16_t num_vf_mc_hashes;
bool clear_to_send;
+ uint16_t vlan_count;
+ uint8_t api_version;
uint16_t switch_domain_id;
+ uint16_t xcast_mode;
+ uint16_t mac_count;
};
/*
@@ -233,6 +244,8 @@ int ngbe_pf_host_init(struct rte_eth_dev *eth_dev);
void ngbe_pf_host_uninit(struct rte_eth_dev *eth_dev);
+void ngbe_pf_mbx_process(struct rte_eth_dev *eth_dev);
+
int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev);
#define NGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
diff --git a/drivers/net/ngbe/ngbe_pf.c b/drivers/net/ngbe/ngbe_pf.c
index 2f9dfc4284..550f5d556b 100644
--- a/drivers/net/ngbe/ngbe_pf.c
+++ b/drivers/net/ngbe/ngbe_pf.c
@@ -10,8 +10,11 @@
#include "base/ngbe.h"
#include "ngbe_ethdev.h"
+#include "rte_pmd_ngbe.h"
#define NGBE_MAX_VFTA (128)
+#define NGBE_VF_MSG_SIZE_DEFAULT 1
+#define NGBE_VF_GET_QUEUE_MSG_SIZE 5
static inline uint16_t
dev_num_vf(struct rte_eth_dev *eth_dev)
@@ -39,6 +42,16 @@ int ngbe_vf_perm_addr_gen(struct rte_eth_dev *dev, uint16_t vf_num)
return 0;
}
+static inline int
+ngbe_mb_intr_setup(struct rte_eth_dev *dev)
+{
+ struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
+
+ intr->mask_misc |= NGBE_ICRMISC_VFMBX;
+
+ return 0;
+}
+
int ngbe_pf_host_init(struct rte_eth_dev *eth_dev)
{
struct ngbe_vf_info **vfinfo = NGBE_DEV_VFDATA(eth_dev);
@@ -85,6 +98,9 @@ int ngbe_pf_host_init(struct rte_eth_dev *eth_dev)
/* init_mailbox_params */
hw->mbx.init_params(hw);
+ /* set mb interrupt mask */
+ ngbe_mb_intr_setup(eth_dev);
+
return ret;
}
@@ -194,3 +210,551 @@ int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev)
return 0;
}
+static void
+ngbe_set_rx_mode(struct rte_eth_dev *eth_dev)
+{
+ struct rte_eth_dev_data *dev_data = eth_dev->data;
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ u32 fctrl, vmolr;
+ uint16_t vfn = dev_num_vf(eth_dev);
+
+ /* disable store-bad-packets */
+ wr32m(hw, NGBE_SECRXCTL, NGBE_SECRXCTL_SAVEBAD, 0);
+
+ /* Check for Promiscuous and All Multicast modes */
+ fctrl = rd32m(hw, NGBE_PSRCTL,
+ ~(NGBE_PSRCTL_UCP | NGBE_PSRCTL_MCP));
+ fctrl |= NGBE_PSRCTL_BCA |
+ NGBE_PSRCTL_MCHFENA;
+
+ vmolr = rd32m(hw, NGBE_POOLETHCTL(vfn),
+ ~(NGBE_POOLETHCTL_UCP |
+ NGBE_POOLETHCTL_MCP |
+ NGBE_POOLETHCTL_UCHA |
+ NGBE_POOLETHCTL_MCHA));
+ vmolr |= NGBE_POOLETHCTL_BCA |
+ NGBE_POOLETHCTL_UTA |
+ NGBE_POOLETHCTL_VLA;
+
+ if (dev_data->promiscuous) {
+ fctrl |= NGBE_PSRCTL_UCP |
+ NGBE_PSRCTL_MCP;
+		/* the PF doesn't want packets routed to the VFs, so clear UPE */
+ vmolr |= NGBE_POOLETHCTL_MCP;
+ } else if (dev_data->all_multicast) {
+ fctrl |= NGBE_PSRCTL_MCP;
+ vmolr |= NGBE_POOLETHCTL_MCP;
+ } else {
+ vmolr |= NGBE_POOLETHCTL_UCHA;
+ vmolr |= NGBE_POOLETHCTL_MCHA;
+ }
+
+ wr32(hw, NGBE_POOLETHCTL(vfn), vmolr);
+
+ wr32(hw, NGBE_PSRCTL, fctrl);
+
+ ngbe_vlan_hw_strip_config(eth_dev);
+}
+
+static inline void
+ngbe_vf_reset_event(struct rte_eth_dev *eth_dev, uint16_t vf)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ struct ngbe_vf_info *vfinfo = *(NGBE_DEV_VFDATA(eth_dev));
+ int rar_entry = hw->mac.num_rar_entries - (vf + 1);
+ uint32_t vmolr = rd32(hw, NGBE_POOLETHCTL(vf));
+
+ vmolr |= (NGBE_POOLETHCTL_UCHA |
+ NGBE_POOLETHCTL_BCA | NGBE_POOLETHCTL_UTA);
+ wr32(hw, NGBE_POOLETHCTL(vf), vmolr);
+
+ wr32(hw, NGBE_POOLTAG(vf), 0);
+
+ /* reset multicast table array for vf */
+ vfinfo[vf].num_vf_mc_hashes = 0;
+
+ /* reset rx mode */
+ ngbe_set_rx_mode(eth_dev);
+
+ hw->mac.clear_rar(hw, rar_entry);
+}
+
+static inline void
+ngbe_vf_reset_msg(struct rte_eth_dev *eth_dev, uint16_t vf)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ uint32_t reg;
+ uint32_t vf_shift;
+ const uint8_t VFRE_SHIFT = 5; /* VFRE 32 bits per slot */
+ const uint8_t VFRE_MASK = (uint8_t)((1U << VFRE_SHIFT) - 1);
+ uint8_t nb_q_per_pool;
+ int i;
+
+ vf_shift = vf & VFRE_MASK;
+
+ /* enable transmit for vf */
+ reg = rd32(hw, NGBE_POOLTXENA(0));
+ reg |= (1 << vf_shift);
+ wr32(hw, NGBE_POOLTXENA(0), reg);
+
+ /* enable all queue drop for IOV */
+ nb_q_per_pool = RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool;
+ for (i = vf * nb_q_per_pool; i < (vf + 1) * nb_q_per_pool; i++) {
+ ngbe_flush(hw);
+ reg = 1 << (i % 32);
+ wr32m(hw, NGBE_QPRXDROP, reg, reg);
+ }
+
+ /* enable receive for vf */
+ reg = rd32(hw, NGBE_POOLRXENA(0));
+	reg |= (1 << vf_shift);
+ wr32(hw, NGBE_POOLRXENA(0), reg);
+
+ ngbe_vf_reset_event(eth_dev, vf);
+}
+
+static int
+ngbe_disable_vf_mc_promisc(struct rte_eth_dev *eth_dev, uint32_t vf)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ uint32_t vmolr;
+
+ vmolr = rd32(hw, NGBE_POOLETHCTL(vf));
+
+ PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous\n", vf);
+
+ vmolr &= ~NGBE_POOLETHCTL_MCP;
+
+ wr32(hw, NGBE_POOLETHCTL(vf), vmolr);
+
+ return 0;
+}
+
+static int
+ngbe_vf_reset(struct rte_eth_dev *eth_dev, uint16_t vf, uint32_t *msgbuf)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ struct ngbe_vf_info *vfinfo = *(NGBE_DEV_VFDATA(eth_dev));
+ unsigned char *vf_mac = vfinfo[vf].vf_mac_addresses;
+ int rar_entry = hw->mac.num_rar_entries - (vf + 1);
+ uint8_t *new_mac = (uint8_t *)(&msgbuf[1]);
+
+ ngbe_vf_reset_msg(eth_dev, vf);
+
+ hw->mac.set_rar(hw, rar_entry, vf_mac, vf, true);
+
+ /* Disable multicast promiscuous at reset */
+ ngbe_disable_vf_mc_promisc(eth_dev, vf);
+
+ /* reply to reset with ack and vf mac address */
+ msgbuf[0] = NGBE_VF_RESET | NGBE_VT_MSGTYPE_ACK;
+ rte_memcpy(new_mac, vf_mac, RTE_ETHER_ADDR_LEN);
+ /*
+ * Piggyback the multicast filter type so VF can compute the
+ * correct vectors
+ */
+ msgbuf[3] = hw->mac.mc_filter_type;
+ ngbe_write_mbx(hw, msgbuf, NGBE_VF_PERMADDR_MSG_LEN, vf);
+
+ return 0;
+}
+
+static int
+ngbe_vf_set_mac_addr(struct rte_eth_dev *eth_dev,
+ uint32_t vf, uint32_t *msgbuf)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ struct ngbe_vf_info *vfinfo = *(NGBE_DEV_VFDATA(eth_dev));
+ int rar_entry = hw->mac.num_rar_entries - (vf + 1);
+ uint8_t *new_mac = (uint8_t *)(&msgbuf[1]);
+ struct rte_ether_addr *ea = (struct rte_ether_addr *)new_mac;
+
+ if (rte_is_valid_assigned_ether_addr(ea)) {
+		rte_memcpy(vfinfo[vf].vf_mac_addresses, new_mac,
+			   RTE_ETHER_ADDR_LEN);
+ return hw->mac.set_rar(hw, rar_entry, new_mac, vf, true);
+ }
+ return -1;
+}
+
+static int
+ngbe_vf_set_multicast(struct rte_eth_dev *eth_dev,
+ uint32_t vf, uint32_t *msgbuf)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ struct ngbe_vf_info *vfinfo = *(NGBE_DEV_VFDATA(eth_dev));
+ int nb_entries = (msgbuf[0] & NGBE_VT_MSGINFO_MASK) >>
+ NGBE_VT_MSGINFO_SHIFT;
+ uint16_t *hash_list = (uint16_t *)&msgbuf[1];
+ uint32_t mta_idx;
+ uint32_t mta_shift;
+ const uint32_t NGBE_MTA_INDEX_MASK = 0x7F;
+ const uint32_t NGBE_MTA_BIT_SHIFT = 5;
+ const uint32_t NGBE_MTA_BIT_MASK = (0x1 << NGBE_MTA_BIT_SHIFT) - 1;
+ uint32_t reg_val;
+ int i;
+ u32 vmolr = rd32(hw, NGBE_POOLETHCTL(vf));
+
+ /* Disable multicast promiscuous first */
+ ngbe_disable_vf_mc_promisc(eth_dev, vf);
+
+ /* only so many hash values supported */
+ nb_entries = RTE_MIN(nb_entries, NGBE_MAX_VF_MC_ENTRIES);
+
+ /* store the mc entries */
+	vfinfo[vf].num_vf_mc_hashes = (uint16_t)nb_entries;
+	for (i = 0; i < nb_entries; i++)
+		vfinfo[vf].vf_mc_hashes[i] = hash_list[i];
+
+ if (nb_entries == 0) {
+ vmolr &= ~NGBE_POOLETHCTL_MCHA;
+ wr32(hw, NGBE_POOLETHCTL(vf), vmolr);
+ return 0;
+ }
+
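+	/* e.g. hash value 0x0965 sets bit 5 (0x0965 & 0x1F) of
+	 * MCADDRTBL register 0x4B ((0x0965 >> 5) & 0x7F)
+	 */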
+	for (i = 0; i < vfinfo[vf].num_vf_mc_hashes; i++) {
+		mta_idx = (vfinfo[vf].vf_mc_hashes[i] >> NGBE_MTA_BIT_SHIFT)
+				& NGBE_MTA_INDEX_MASK;
+		mta_shift = vfinfo[vf].vf_mc_hashes[i] & NGBE_MTA_BIT_MASK;
+ reg_val = rd32(hw, NGBE_MCADDRTBL(mta_idx));
+ reg_val |= (1 << mta_shift);
+ wr32(hw, NGBE_MCADDRTBL(mta_idx), reg_val);
+ }
+
+ vmolr |= NGBE_POOLETHCTL_MCHA;
+ wr32(hw, NGBE_POOLETHCTL(vf), vmolr);
+
+ return 0;
+}
+
+static int
+ngbe_vf_set_vlan(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
+{
+ int add, vid;
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ struct ngbe_vf_info *vfinfo = *(NGBE_DEV_VFDATA(eth_dev));
+
+ add = (msgbuf[0] & NGBE_VT_MSGINFO_MASK)
+ >> NGBE_VT_MSGINFO_SHIFT;
+ vid = NGBE_PSRVLAN_VID(msgbuf[1]);
+
+ if (add)
+ vfinfo[vf].vlan_count++;
+ else if (vfinfo[vf].vlan_count)
+ vfinfo[vf].vlan_count--;
+ return hw->mac.set_vfta(hw, vid, vf, (bool)add, false);
+}
+
+static int
+ngbe_set_vf_lpe(struct rte_eth_dev *eth_dev,
+ __rte_unused uint32_t vf, uint32_t *msgbuf)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ uint32_t max_frame = msgbuf[1];
+ uint32_t max_frs;
+
+ if (max_frame < RTE_ETHER_MIN_LEN ||
+ max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
+ return -1;
+
+ max_frs = rd32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK);
+ if (max_frs < max_frame) {
+ wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK,
+ NGBE_FRMSZ_MAX(max_frame));
+ }
+
+ return 0;
+}
+
+static int
+ngbe_negotiate_vf_api(struct rte_eth_dev *eth_dev,
+ uint32_t vf, uint32_t *msgbuf)
+{
+ uint32_t api_version = msgbuf[1];
+ struct ngbe_vf_info *vfinfo = *NGBE_DEV_VFDATA(eth_dev);
+
+ switch (api_version) {
+ case ngbe_mbox_api_10:
+ case ngbe_mbox_api_11:
+ case ngbe_mbox_api_12:
+ case ngbe_mbox_api_13:
+ vfinfo[vf].api_version = (uint8_t)api_version;
+ return 0;
+ default:
+ break;
+ }
+
+ PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d\n",
+ api_version, vf);
+
+ return -1;
+}
+
+static int
+ngbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
+{
+ struct ngbe_vf_info *vfinfo = *NGBE_DEV_VFDATA(eth_dev);
+ uint32_t default_q = 0;
+
+ /* Verify if the PF supports the mbox APIs version or not */
+ switch (vfinfo[vf].api_version) {
+ case ngbe_mbox_api_20:
+ case ngbe_mbox_api_11:
+ case ngbe_mbox_api_12:
+ case ngbe_mbox_api_13:
+ break;
+ default:
+ return -1;
+ }
+
+ /* Notify VF of Rx and Tx queue number */
+ msgbuf[NGBE_VF_RX_QUEUES] = RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool;
+ msgbuf[NGBE_VF_TX_QUEUES] = RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool;
+
+ /* Notify VF of default queue */
+ msgbuf[NGBE_VF_DEF_QUEUE] = default_q;
+
+ msgbuf[NGBE_VF_TRANS_VLAN] = 0;
+
+ return 0;
+}
+
+static int
+ngbe_set_vf_mc_promisc(struct rte_eth_dev *eth_dev,
+ uint32_t vf, uint32_t *msgbuf)
+{
+ struct ngbe_vf_info *vfinfo = *(NGBE_DEV_VFDATA(eth_dev));
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ int xcast_mode = msgbuf[1]; /* msgbuf contains the flag to enable */
+ u32 vmolr, fctrl, disable, enable;
+
+ switch (vfinfo[vf].api_version) {
+ case ngbe_mbox_api_12:
+		/* promisc was introduced in API version 1.3 */
+		if (xcast_mode == NGBEVF_XCAST_MODE_PROMISC)
+			return -EOPNOTSUPP;
+		break;
+ case ngbe_mbox_api_13:
+ break;
+ default:
+ return -1;
+ }
+
+ if (vfinfo[vf].xcast_mode == xcast_mode)
+ goto out;
+
+ switch (xcast_mode) {
+ case NGBEVF_XCAST_MODE_NONE:
+ disable = NGBE_POOLETHCTL_BCA | NGBE_POOLETHCTL_MCHA |
+ NGBE_POOLETHCTL_MCP | NGBE_POOLETHCTL_UCP |
+ NGBE_POOLETHCTL_VLP;
+ enable = 0;
+ break;
+ case NGBEVF_XCAST_MODE_MULTI:
+ disable = NGBE_POOLETHCTL_MCP | NGBE_POOLETHCTL_UCP |
+ NGBE_POOLETHCTL_VLP;
+ enable = NGBE_POOLETHCTL_BCA | NGBE_POOLETHCTL_MCHA;
+ break;
+ case NGBEVF_XCAST_MODE_ALLMULTI:
+ disable = NGBE_POOLETHCTL_UCP | NGBE_POOLETHCTL_VLP;
+ enable = NGBE_POOLETHCTL_BCA | NGBE_POOLETHCTL_MCHA |
+ NGBE_POOLETHCTL_MCP;
+ break;
+ case NGBEVF_XCAST_MODE_PROMISC:
+ fctrl = rd32(hw, NGBE_PSRCTL);
+ if (!(fctrl & NGBE_PSRCTL_UCP)) {
+ /* VF promisc requires PF in promisc */
+ PMD_DRV_LOG(ERR,
+ "Enabling VF promisc requires PF in promisc\n");
+ return -1;
+ }
+
+ disable = 0;
+ enable = NGBE_POOLETHCTL_BCA | NGBE_POOLETHCTL_MCHA |
+ NGBE_POOLETHCTL_MCP | NGBE_POOLETHCTL_UCP |
+ NGBE_POOLETHCTL_VLP;
+ break;
+ default:
+ return -1;
+ }
+
+ vmolr = rd32(hw, NGBE_POOLETHCTL(vf));
+ vmolr &= ~disable;
+ vmolr |= enable;
+ wr32(hw, NGBE_POOLETHCTL(vf), vmolr);
+ vfinfo[vf].xcast_mode = xcast_mode;
+
+out:
+ msgbuf[1] = xcast_mode;
+
+ return 0;
+}
+
+static int
+ngbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_vf_info *vf_info = *(NGBE_DEV_VFDATA(dev));
+ uint8_t *new_mac = (uint8_t *)(&msgbuf[1]);
+ struct rte_ether_addr *ea = (struct rte_ether_addr *)new_mac;
+ int index = (msgbuf[0] & NGBE_VT_MSGINFO_MASK) >>
+ NGBE_VT_MSGINFO_SHIFT;
+
+ if (index) {
+ if (!rte_is_valid_assigned_ether_addr(ea)) {
+ PMD_DRV_LOG(ERR, "set invalid mac vf:%d\n", vf);
+ return -1;
+ }
+
+ vf_info[vf].mac_count++;
+
+ hw->mac.set_rar(hw, vf_info[vf].mac_count,
+ new_mac, vf, true);
+ } else {
+ if (vf_info[vf].mac_count) {
+ hw->mac.clear_rar(hw, vf_info[vf].mac_count);
+ vf_info[vf].mac_count = 0;
+ }
+ }
+ return 0;
+}
+
+static int
+ngbe_rcv_msg_from_vf(struct rte_eth_dev *eth_dev, uint16_t vf)
+{
+ uint16_t mbx_size = NGBE_P2VMBX_SIZE;
+ uint16_t msg_size = NGBE_VF_MSG_SIZE_DEFAULT;
+ uint32_t msgbuf[NGBE_P2VMBX_SIZE];
+ int32_t retval;
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ struct ngbe_vf_info *vfinfo = *NGBE_DEV_VFDATA(eth_dev);
+ struct rte_pmd_ngbe_mb_event_param ret_param;
+
+ retval = ngbe_read_mbx(hw, msgbuf, mbx_size, vf);
+ if (retval) {
+ PMD_DRV_LOG(ERR, "Error mbx recv msg from VF %d", vf);
+ return retval;
+ }
+
+	/* do nothing if the message has already been processed */
+ if (msgbuf[0] & (NGBE_VT_MSGTYPE_ACK | NGBE_VT_MSGTYPE_NACK))
+ return retval;
+
+ /* flush the ack before we write any messages back */
+ ngbe_flush(hw);
+
+ /**
+	 * initialise the structure to send to the user application;
+	 * the user's response will be returned in the retval field
+ */
+ ret_param.retval = RTE_PMD_NGBE_MB_EVENT_PROCEED;
+ ret_param.vfid = vf;
+ ret_param.msg_type = msgbuf[0] & 0xFFFF;
+ ret_param.msg = (void *)msgbuf;
+
+ /* perform VF reset */
+ if (msgbuf[0] == NGBE_VF_RESET) {
+ int ret = ngbe_vf_reset(eth_dev, vf, msgbuf);
+
+ vfinfo[vf].clear_to_send = true;
+
+ /* notify application about VF reset */
+ rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_VF_MBOX,
+ &ret_param);
+ return ret;
+ }
+
+ /**
+	 * ask the user application if we are allowed to perform those functions:
+	 * if ret_param.retval == RTE_PMD_NGBE_MB_EVENT_PROCEED,
+	 * then business as usual;
+	 * if RTE_PMD_NGBE_MB_EVENT_NOOP_ACK, do nothing and send an ACK to the VF;
+	 * if RTE_PMD_NGBE_MB_EVENT_NOOP_NACK, do nothing and send a NACK to the VF
+ */
+ rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_VF_MBOX,
+ &ret_param);
+
+ retval = ret_param.retval;
+
+ /* check & process VF to PF mailbox message */
+ switch ((msgbuf[0] & 0xFFFF)) {
+ case NGBE_VF_SET_MAC_ADDR:
+ if (retval == RTE_PMD_NGBE_MB_EVENT_PROCEED)
+ retval = ngbe_vf_set_mac_addr(eth_dev, vf, msgbuf);
+ break;
+ case NGBE_VF_SET_MULTICAST:
+ if (retval == RTE_PMD_NGBE_MB_EVENT_PROCEED)
+ retval = ngbe_vf_set_multicast(eth_dev, vf, msgbuf);
+ break;
+ case NGBE_VF_SET_LPE:
+ if (retval == RTE_PMD_NGBE_MB_EVENT_PROCEED)
+ retval = ngbe_set_vf_lpe(eth_dev, vf, msgbuf);
+ break;
+ case NGBE_VF_SET_VLAN:
+ if (retval == RTE_PMD_NGBE_MB_EVENT_PROCEED)
+ retval = ngbe_vf_set_vlan(eth_dev, vf, msgbuf);
+ break;
+ case NGBE_VF_API_NEGOTIATE:
+ retval = ngbe_negotiate_vf_api(eth_dev, vf, msgbuf);
+ break;
+ case NGBE_VF_GET_QUEUES:
+ retval = ngbe_get_vf_queues(eth_dev, vf, msgbuf);
+ msg_size = NGBE_VF_GET_QUEUE_MSG_SIZE;
+ break;
+ case NGBE_VF_UPDATE_XCAST_MODE:
+ if (retval == RTE_PMD_NGBE_MB_EVENT_PROCEED)
+ retval = ngbe_set_vf_mc_promisc(eth_dev, vf, msgbuf);
+ break;
+ case NGBE_VF_SET_MACVLAN:
+ if (retval == RTE_PMD_NGBE_MB_EVENT_PROCEED)
+ retval = ngbe_set_vf_macvlan_msg(eth_dev, vf, msgbuf);
+ break;
+ default:
+ PMD_DRV_LOG(DEBUG, "Unhandled Msg %8.8x", (uint32_t)msgbuf[0]);
+ retval = NGBE_ERR_MBX;
+ break;
+ }
+
+	/* respond to the VF according to the message processing result */
+ if (retval)
+ msgbuf[0] |= NGBE_VT_MSGTYPE_NACK;
+ else
+ msgbuf[0] |= NGBE_VT_MSGTYPE_ACK;
+
+ msgbuf[0] |= NGBE_VT_MSGTYPE_CTS;
+
+ ngbe_write_mbx(hw, msgbuf, msg_size, vf);
+
+ return retval;
+}
+
+static inline void
+ngbe_rcv_ack_from_vf(struct rte_eth_dev *eth_dev, uint16_t vf)
+{
+ uint32_t msg = NGBE_VT_MSGTYPE_NACK;
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ struct ngbe_vf_info *vfinfo = *NGBE_DEV_VFDATA(eth_dev);
+
+ if (!vfinfo[vf].clear_to_send)
+ ngbe_write_mbx(hw, &msg, 1, vf);
+}
+
+void ngbe_pf_mbx_process(struct rte_eth_dev *eth_dev)
+{
+ uint16_t vf;
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+
+ for (vf = 0; vf < dev_num_vf(eth_dev); vf++) {
+ /* check & process vf function level reset */
+ if (!ngbe_check_for_rst(hw, vf))
+ ngbe_vf_reset_event(eth_dev, vf);
+
+ /* check & process vf mailbox messages */
+ if (!ngbe_check_for_msg(hw, vf))
+ ngbe_rcv_msg_from_vf(eth_dev, vf);
+
+ /* check & process acks from vf */
+ if (!ngbe_check_for_ack(hw, vf))
+ ngbe_rcv_ack_from_vf(eth_dev, vf);
+ }
+}
diff --git a/drivers/net/ngbe/rte_pmd_ngbe.h b/drivers/net/ngbe/rte_pmd_ngbe.h
new file mode 100644
index 0000000000..e895ecd7ef
--- /dev/null
+++ b/drivers/net/ngbe/rte_pmd_ngbe.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+/**
+ * @file rte_pmd_ngbe.h
+ * ngbe PMD specific functions.
+ *
+ **/
+
+#ifndef _PMD_NGBE_H_
+#define _PMD_NGBE_H_
+
+#include <rte_compat.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+
+/**
+ * Response sent back to ngbe driver from user app after callback
+ */
+enum rte_pmd_ngbe_mb_event_rsp {
+ RTE_PMD_NGBE_MB_EVENT_NOOP_ACK, /**< skip mbox request and ACK */
+ RTE_PMD_NGBE_MB_EVENT_NOOP_NACK, /**< skip mbox request and NACK */
+ RTE_PMD_NGBE_MB_EVENT_PROCEED, /**< proceed with mbox request */
+ RTE_PMD_NGBE_MB_EVENT_MAX /**< max value of this enum */
+};
+
+/**
+ * Data sent to the user application when the callback is executed.
+ */
+struct rte_pmd_ngbe_mb_event_param {
+ uint16_t vfid; /**< Virtual Function number */
+ uint16_t msg_type; /**< VF to PF message type, defined in ngbe_mbx.h */
+ uint16_t retval; /**< return value */
+ void *msg; /**< pointer to message */
+};
+
+#endif /* _PMD_NGBE_H_ */
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 20/32] net/ngbe: support flow control
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (18 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 19/32] net/ngbe: add mailbox process operations Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 21/32] net/ngbe: support device LED on and off Jiawen Wu
` (11 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support getting and setting flow control.
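
As a usage sketch, flow control is driven through the generic ethdev
API that this patch wires up for ngbe; configure_pause and the chosen
mode below are illustrative only:

	#include <string.h>
	#include <rte_ethdev.h>

	/* Enable full (Rx and Tx) link flow control on a port. */
	static int
	configure_pause(uint16_t port_id)
	{
		struct rte_eth_fc_conf fc_conf;

		memset(&fc_conf, 0, sizeof(fc_conf));
		/* start from the port's current settings */
		if (rte_eth_dev_flow_ctrl_get(port_id, &fc_conf) != 0)
			return -1;

		fc_conf.mode = RTE_FC_FULL;	/* send and honor PAUSE frames */
		fc_conf.autoneg = 1;
		return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
	}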
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 1 +
doc/guides/nics/ngbe.rst | 1 +
drivers/net/ngbe/base/ngbe_dummy.h | 31 +++
drivers/net/ngbe/base/ngbe_hw.c | 334 +++++++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_hw.h | 6 +
drivers/net/ngbe/base/ngbe_phy.c | 9 +
drivers/net/ngbe/base/ngbe_phy.h | 3 +
drivers/net/ngbe/base/ngbe_phy_mvl.c | 57 +++++
drivers/net/ngbe/base/ngbe_phy_mvl.h | 4 +
drivers/net/ngbe/base/ngbe_phy_rtl.c | 42 ++++
drivers/net/ngbe/base/ngbe_phy_rtl.h | 3 +
drivers/net/ngbe/base/ngbe_phy_yt.c | 44 ++++
drivers/net/ngbe/base/ngbe_phy_yt.h | 6 +
drivers/net/ngbe/base/ngbe_type.h | 32 +++
drivers/net/ngbe/ngbe_ethdev.c | 111 +++++++++
drivers/net/ngbe/ngbe_ethdev.h | 8 +
16 files changed, 692 insertions(+)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 9a497ccae6..00150282cb 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -22,6 +22,7 @@ RSS key update = Y
RSS reta update = Y
SR-IOV = Y
VLAN filter = Y
+Flow control = Y
CRC offload = P
VLAN offload = P
QinQ offload = P
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index ce160e832c..09175e83cd 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -23,6 +23,7 @@ Features
- Port hardware statistics
- Jumbo frames
- Link state information
+- Link flow control
- Interrupt mode for RX
- Scattered and gather for TX and RX
- FW version
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 940b448734..0baabcbae7 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -154,6 +154,17 @@ static inline void ngbe_mac_set_vlan_anti_spoofing_dummy(struct ngbe_hw *TUP0,
bool TUP1, int TUP2)
{
}
+static inline s32 ngbe_mac_fc_enable_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_setup_fc_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline void ngbe_mac_fc_autoneg_dummy(struct ngbe_hw *TUP0)
+{
+}
static inline s32 ngbe_mac_init_thermal_ssth_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
@@ -205,6 +216,20 @@ static inline s32 ngbe_phy_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1,
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_get_phy_advertised_pause_dummy(struct ngbe_hw *TUP0,
+ u8 *TUP1)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_get_phy_lp_advertised_pause_dummy(struct ngbe_hw *TUP0,
+ u8 *TUP1)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_set_phy_pause_adv_dummy(struct ngbe_hw *TUP0, u16 TUP1)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
/* struct ngbe_mbx_operations */
static inline void ngbe_mbx_init_params_dummy(struct ngbe_hw *TUP0)
@@ -264,6 +289,9 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->mac.set_vlvf = ngbe_mac_set_vlvf_dummy;
hw->mac.set_mac_anti_spoofing = ngbe_mac_set_mac_anti_spoofing_dummy;
hw->mac.set_vlan_anti_spoofing = ngbe_mac_set_vlan_anti_spoofing_dummy;
+ hw->mac.fc_enable = ngbe_mac_fc_enable_dummy;
+ hw->mac.setup_fc = ngbe_mac_setup_fc_dummy;
+ hw->mac.fc_autoneg = ngbe_mac_fc_autoneg_dummy;
hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy;
hw->mac.check_overtemp = ngbe_mac_check_overtemp_dummy;
hw->phy.identify = ngbe_phy_identify_dummy;
@@ -275,6 +303,9 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->phy.write_reg_unlocked = ngbe_phy_write_reg_unlocked_dummy;
hw->phy.setup_link = ngbe_phy_setup_link_dummy;
hw->phy.check_link = ngbe_phy_check_link_dummy;
+ hw->phy.get_adv_pause = ngbe_get_phy_advertised_pause_dummy;
+ hw->phy.get_lp_adv_pause = ngbe_get_phy_lp_advertised_pause_dummy;
+ hw->phy.set_pause_adv = ngbe_set_phy_pause_adv_dummy;
hw->mbx.init_params = ngbe_mbx_init_params_dummy;
hw->mbx.read = ngbe_mbx_read_dummy;
hw->mbx.write = ngbe_mbx_write_dummy;
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index afde58a89e..35351a2702 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -18,6 +18,8 @@
**/
s32 ngbe_start_hw(struct ngbe_hw *hw)
{
+ s32 err;
+
DEBUGFUNC("ngbe_start_hw");
/* Clear the VLAN filter table */
@@ -26,6 +28,13 @@ s32 ngbe_start_hw(struct ngbe_hw *hw)
/* Clear statistics registers */
hw->mac.clear_hw_cntrs(hw);
+ /* Setup flow control */
+ err = hw->mac.setup_fc(hw);
+ if (err != 0 && err != NGBE_NOT_IMPLEMENTED) {
+ DEBUGOUT("Flow control setup failed, returning %d\n", err);
+ return err;
+ }
+
/* Clear adapter stopped flag */
hw->adapter_stopped = false;
@@ -703,6 +712,326 @@ s32 ngbe_update_mc_addr_list(struct ngbe_hw *hw, u8 *mc_addr_list,
return 0;
}
+/**
+ * ngbe_setup_fc_em - Set up flow control
+ * @hw: pointer to hardware structure
+ *
+ * Called at init time to set up flow control.
+ **/
+s32 ngbe_setup_fc_em(struct ngbe_hw *hw)
+{
+ s32 err = 0;
+ u16 reg_cu = 0;
+
+ DEBUGFUNC("ngbe_setup_fc");
+
+ /* Validate the requested mode */
+ if (hw->fc.strict_ieee && hw->fc.requested_mode == ngbe_fc_rx_pause) {
+ DEBUGOUT("ngbe_fc_rx_pause not valid in strict IEEE mode\n");
+ err = NGBE_ERR_INVALID_LINK_SETTINGS;
+ goto out;
+ }
+
+ /*
+ * 1gig parts do not have a word in the EEPROM to determine the
+ * default flow control setting, so we explicitly set it to full.
+ */
+ if (hw->fc.requested_mode == ngbe_fc_default)
+ hw->fc.requested_mode = ngbe_fc_full;
+
+ /*
+ * The possible values of fc.requested_mode are:
+ * 0: Flow control is completely disabled
+ * 1: Rx flow control is enabled (we can receive pause frames,
+ * but not send pause frames).
+ * 2: Tx flow control is enabled (we can send pause frames but
+ * we do not support receiving pause frames).
+ * 3: Both Rx and Tx flow control (symmetric) are enabled.
+ * other: Invalid.
+ */
+ switch (hw->fc.requested_mode) {
+ case ngbe_fc_none:
+ /* Flow control completely disabled by software override. */
+ break;
+ case ngbe_fc_tx_pause:
+ /*
+ * Tx Flow control is enabled, and Rx Flow control is
+ * disabled by software override.
+ */
+ if (hw->phy.type == ngbe_phy_mvl_sfi ||
+ hw->phy.type == ngbe_phy_yt8521s_sfi)
+ reg_cu |= MVL_FANA_ASM_PAUSE;
+ else
+ reg_cu |= 0x800; /*need to merge rtl and mvl on page 0*/
+ break;
+ case ngbe_fc_rx_pause:
+ /*
+ * Rx Flow control is enabled and Tx Flow control is
+ * disabled by software override. Since there really
+ * isn't a way to advertise that we are capable of RX
+ * Pause ONLY, we will advertise that we support both
+ * symmetric and asymmetric Rx PAUSE, as such we fall
+ * through to the fc_full statement. Later, we will
+ * disable the adapter's ability to send PAUSE frames.
+ */
+ case ngbe_fc_full:
+ /* Flow control (both Rx and Tx) is enabled by SW override. */
+ if (hw->phy.type == ngbe_phy_mvl_sfi ||
+ hw->phy.type == ngbe_phy_yt8521s_sfi)
+ reg_cu |= MVL_FANA_SYM_PAUSE;
+ else
+ reg_cu |= 0xC00; /*need to merge rtl and mvl on page 0*/
+ break;
+ default:
+ DEBUGOUT("Flow control param set incorrectly\n");
+ err = NGBE_ERR_CONFIG;
+ goto out;
+ }
+
+ err = hw->phy.set_pause_adv(hw, reg_cu);
+
+out:
+ return err;
+}
+
+/**
+ * ngbe_fc_enable - Enable flow control
+ * @hw: pointer to hardware structure
+ *
+ * Enable flow control according to the current settings.
+ **/
+s32 ngbe_fc_enable(struct ngbe_hw *hw)
+{
+ s32 err = 0;
+ u32 mflcn_reg, fccfg_reg;
+ u32 pause_time;
+ u32 fcrtl, fcrth;
+
+ DEBUGFUNC("ngbe_fc_enable");
+
+ /* Validate the water mark configuration */
+ if (!hw->fc.pause_time) {
+ err = NGBE_ERR_INVALID_LINK_SETTINGS;
+ goto out;
+ }
+
+ /* Low water mark of zero causes XOFF floods */
+ if ((hw->fc.current_mode & ngbe_fc_tx_pause) && hw->fc.high_water) {
+ if (!hw->fc.low_water ||
+ hw->fc.low_water >= hw->fc.high_water) {
+ DEBUGOUT("Invalid water mark configuration\n");
+ err = NGBE_ERR_INVALID_LINK_SETTINGS;
+ goto out;
+ }
+ }
+
+ /* Negotiate the fc mode to use */
+ hw->mac.fc_autoneg(hw);
+
+ /* Disable any previous flow control settings */
+ mflcn_reg = rd32(hw, NGBE_RXFCCFG);
+ mflcn_reg &= ~NGBE_RXFCCFG_FC;
+
+ fccfg_reg = rd32(hw, NGBE_TXFCCFG);
+ fccfg_reg &= ~NGBE_TXFCCFG_FC;
+ /*
+ * The possible values of fc.current_mode are:
+ * 0: Flow control is completely disabled
+ * 1: Rx flow control is enabled (we can receive pause frames,
+ * but not send pause frames).
+ * 2: Tx flow control is enabled (we can send pause frames but
+ * we do not support receiving pause frames).
+ * 3: Both Rx and Tx flow control (symmetric) are enabled.
+ * other: Invalid.
+ */
+ switch (hw->fc.current_mode) {
+ case ngbe_fc_none:
+ /*
+ * Flow control is disabled by software override or autoneg.
+ * The code below will actually disable it in the HW.
+ */
+ break;
+ case ngbe_fc_rx_pause:
+ /*
+ * Rx Flow control is enabled and Tx Flow control is
+ * disabled by software override. Since there really
+ * isn't a way to advertise that we are capable of RX
+ * Pause ONLY, we will advertise that we support both
+ * symmetric and asymmetric Rx PAUSE. Later, we will
+ * disable the adapter's ability to send PAUSE frames.
+ */
+ mflcn_reg |= NGBE_RXFCCFG_FC;
+ break;
+ case ngbe_fc_tx_pause:
+ /*
+ * Tx Flow control is enabled, and Rx Flow control is
+ * disabled by software override.
+ */
+ fccfg_reg |= NGBE_TXFCCFG_FC;
+ break;
+ case ngbe_fc_full:
+ /* Flow control (both Rx and Tx) is enabled by SW override. */
+ mflcn_reg |= NGBE_RXFCCFG_FC;
+ fccfg_reg |= NGBE_TXFCCFG_FC;
+ break;
+ default:
+ DEBUGOUT("Flow control param set incorrectly\n");
+ err = NGBE_ERR_CONFIG;
+ goto out;
+ }
+
+ /* Set 802.3x based flow control settings. */
+ wr32(hw, NGBE_RXFCCFG, mflcn_reg);
+ wr32(hw, NGBE_TXFCCFG, fccfg_reg);
+
+ /* Set up and enable Rx high/low water mark thresholds, enable XON. */
+ if ((hw->fc.current_mode & ngbe_fc_tx_pause) &&
+ hw->fc.high_water) {
+ fcrtl = NGBE_FCWTRLO_TH(hw->fc.low_water) |
+ NGBE_FCWTRLO_XON;
+ fcrth = NGBE_FCWTRHI_TH(hw->fc.high_water) |
+ NGBE_FCWTRHI_XOFF;
+ } else {
+ /*
+ * In order to prevent Tx hangs when the internal Tx
+ * switch is enabled we must set the high water mark
+ * to the Rx packet buffer size - 24KB. This allows
+ * the Tx switch to function even under heavy Rx
+ * workloads.
+ */
+ fcrtl = 0;
+ fcrth = rd32(hw, NGBE_PBRXSIZE) - 24576;
+ }
+ wr32(hw, NGBE_FCWTRLO, fcrtl);
+ wr32(hw, NGBE_FCWTRHI, fcrth);
+
+ /* Configure pause time */
+ pause_time = NGBE_RXFCFSH_TIME(hw->fc.pause_time);
+ wr32(hw, NGBE_FCXOFFTM, pause_time * 0x00010000);
+
+ /* Configure flow control refresh threshold value */
+ wr32(hw, NGBE_RXFCRFSH, hw->fc.pause_time / 2);
+
+out:
+ return err;
+}
+
+/**
+ * ngbe_negotiate_fc - Negotiate flow control
+ * @hw: pointer to hardware structure
+ * @adv_reg: flow control advertised settings
+ * @lp_reg: link partner's flow control settings
+ * @adv_sym: symmetric pause bit in advertisement
+ * @adv_asm: asymmetric pause bit in advertisement
+ * @lp_sym: symmetric pause bit in link partner advertisement
+ * @lp_asm: asymmetric pause bit in link partner advertisement
+ *
+ * Find the intersection between advertised settings and link partner's
+ * advertised settings
+ **/
+s32 ngbe_negotiate_fc(struct ngbe_hw *hw, u32 adv_reg, u32 lp_reg,
+ u32 adv_sym, u32 adv_asm, u32 lp_sym, u32 lp_asm)
+{
+ if ((!(adv_reg)) || (!(lp_reg))) {
+ DEBUGOUT("Local or link partner's advertised flow control "
+ "settings are NULL. Local: %x, link partner: %x\n",
+ adv_reg, lp_reg);
+ return NGBE_ERR_FC_NOT_NEGOTIATED;
+ }
+
+ if ((adv_reg & adv_sym) && (lp_reg & lp_sym)) {
+ /*
+ * Now we need to check if the user selected Rx ONLY
+ * of pause frames. In this case, we had to advertise
+ * FULL flow control because we could not advertise RX
+ * ONLY. Hence, we must now check to see if we need to
+ * turn OFF the TRANSMISSION of PAUSE frames.
+ */
+ if (hw->fc.requested_mode == ngbe_fc_full) {
+ hw->fc.current_mode = ngbe_fc_full;
+ DEBUGOUT("Flow Control = FULL.\n");
+ } else {
+ hw->fc.current_mode = ngbe_fc_rx_pause;
+ DEBUGOUT("Flow Control=RX PAUSE frames only\n");
+ }
+ } else if (!(adv_reg & adv_sym) && (adv_reg & adv_asm) &&
+ (lp_reg & lp_sym) && (lp_reg & lp_asm)) {
+ hw->fc.current_mode = ngbe_fc_tx_pause;
+ DEBUGOUT("Flow Control = TX PAUSE frames only.\n");
+ } else if ((adv_reg & adv_sym) && (adv_reg & adv_asm) &&
+ !(lp_reg & lp_sym) && (lp_reg & lp_asm)) {
+ hw->fc.current_mode = ngbe_fc_rx_pause;
+ DEBUGOUT("Flow Control = RX PAUSE frames only.\n");
+ } else {
+ hw->fc.current_mode = ngbe_fc_none;
+ DEBUGOUT("Flow Control = NONE.\n");
+ }
+ return 0;
+}
+
+/**
+ * ngbe_fc_autoneg_em - Enable flow control IEEE clause 37
+ * @hw: pointer to hardware structure
+ *
+ * Enable flow control according to IEEE clause 37.
+ **/
+STATIC s32 ngbe_fc_autoneg_em(struct ngbe_hw *hw)
+{
+ u8 technology_ability_reg = 0;
+ u8 lp_technology_ability_reg = 0;
+
+ hw->phy.get_adv_pause(hw, &technology_ability_reg);
+ hw->phy.get_lp_adv_pause(hw, &lp_technology_ability_reg);
+
+ return ngbe_negotiate_fc(hw, (u32)technology_ability_reg,
+ (u32)lp_technology_ability_reg,
+ NGBE_TAF_SYM_PAUSE, NGBE_TAF_ASM_PAUSE,
+ NGBE_TAF_SYM_PAUSE, NGBE_TAF_ASM_PAUSE);
+}
+
+/**
+ * ngbe_fc_autoneg - Configure flow control
+ * @hw: pointer to hardware structure
+ *
+ * Compares our advertised flow control capabilities to those advertised by
+ * our link partner, and determines the proper flow control mode to use.
+ **/
+void ngbe_fc_autoneg(struct ngbe_hw *hw)
+{
+ s32 err = NGBE_ERR_FC_NOT_NEGOTIATED;
+ u32 speed;
+ bool link_up;
+
+ DEBUGFUNC("ngbe_fc_autoneg");
+
+ /*
+ * AN should have completed when the cable was plugged in.
+ * Look for reasons to bail out. Bail out if:
+ * - FC autoneg is disabled, or if
+ * - link is not up.
+ */
+ if (hw->fc.disable_fc_autoneg) {
+ DEBUGOUT("Flow control autoneg is disabled");
+ goto out;
+ }
+
+ hw->mac.check_link(hw, &speed, &link_up, false);
+ if (!link_up) {
+ DEBUGOUT("The link is down");
+ goto out;
+ }
+
+ err = ngbe_fc_autoneg_em(hw);
+
+out:
+ if (err == 0) {
+ hw->fc.fc_was_autonegged = true;
+ } else {
+ hw->fc.fc_was_autonegged = false;
+ hw->fc.current_mode = hw->fc.requested_mode;
+ }
+}
+
/**
* ngbe_acquire_swfw_sync - Acquire SWFW semaphore
* @hw: pointer to hardware structure
@@ -1520,6 +1849,11 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
mac->set_mac_anti_spoofing = ngbe_set_mac_anti_spoofing;
mac->set_vlan_anti_spoofing = ngbe_set_vlan_anti_spoofing;
+ /* Flow Control */
+ mac->fc_enable = ngbe_fc_enable;
+ mac->fc_autoneg = ngbe_fc_autoneg;
+ mac->setup_fc = ngbe_setup_fc_em;
+
/* Link */
mac->get_link_capabilities = ngbe_get_link_capabilities_em;
mac->check_link = ngbe_check_mac_link_em;
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 83ad646dde..a84ddca6ac 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -42,6 +42,10 @@ s32 ngbe_update_mc_addr_list(struct ngbe_hw *hw, u8 *mc_addr_list,
s32 ngbe_disable_sec_rx_path(struct ngbe_hw *hw);
s32 ngbe_enable_sec_rx_path(struct ngbe_hw *hw);
+s32 ngbe_setup_fc_em(struct ngbe_hw *hw);
+s32 ngbe_fc_enable(struct ngbe_hw *hw);
+void ngbe_fc_autoneg(struct ngbe_hw *hw);
+
s32 ngbe_validate_mac_addr(u8 *mac_addr);
s32 ngbe_acquire_swfw_sync(struct ngbe_hw *hw, u32 mask);
void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask);
@@ -64,6 +68,8 @@ s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw);
void ngbe_disable_rx(struct ngbe_hw *hw);
void ngbe_enable_rx(struct ngbe_hw *hw);
void ngbe_set_mta(struct ngbe_hw *hw, u8 *mc_addr);
+s32 ngbe_negotiate_fc(struct ngbe_hw *hw, u32 adv_reg, u32 lp_reg,
+ u32 adv_sym, u32 adv_asm, u32 lp_sym, u32 lp_asm);
s32 ngbe_init_shared_code(struct ngbe_hw *hw);
s32 ngbe_set_mac_type(struct ngbe_hw *hw);
s32 ngbe_init_ops_pf(struct ngbe_hw *hw);
diff --git a/drivers/net/ngbe/base/ngbe_phy.c b/drivers/net/ngbe/base/ngbe_phy.c
index 691171ee9f..51b0a2ec60 100644
--- a/drivers/net/ngbe/base/ngbe_phy.c
+++ b/drivers/net/ngbe/base/ngbe_phy.c
@@ -429,18 +429,27 @@ s32 ngbe_init_phy(struct ngbe_hw *hw)
hw->phy.init_hw = ngbe_init_phy_rtl;
hw->phy.check_link = ngbe_check_phy_link_rtl;
hw->phy.setup_link = ngbe_setup_phy_link_rtl;
+ hw->phy.get_adv_pause = ngbe_get_phy_advertised_pause_rtl;
+ hw->phy.get_lp_adv_pause = ngbe_get_phy_lp_advertised_pause_rtl;
+ hw->phy.set_pause_adv = ngbe_set_phy_pause_adv_rtl;
break;
case ngbe_phy_mvl:
case ngbe_phy_mvl_sfi:
hw->phy.init_hw = ngbe_init_phy_mvl;
hw->phy.check_link = ngbe_check_phy_link_mvl;
hw->phy.setup_link = ngbe_setup_phy_link_mvl;
+ hw->phy.get_adv_pause = ngbe_get_phy_advertised_pause_mvl;
+ hw->phy.get_lp_adv_pause = ngbe_get_phy_lp_advertised_pause_mvl;
+ hw->phy.set_pause_adv = ngbe_set_phy_pause_adv_mvl;
break;
case ngbe_phy_yt8521s:
case ngbe_phy_yt8521s_sfi:
hw->phy.init_hw = ngbe_init_phy_yt;
hw->phy.check_link = ngbe_check_phy_link_yt;
hw->phy.setup_link = ngbe_setup_phy_link_yt;
+ hw->phy.get_adv_pause = ngbe_get_phy_advertised_pause_yt;
+ hw->phy.get_lp_adv_pause = ngbe_get_phy_lp_advertised_pause_yt;
+ hw->phy.set_pause_adv = ngbe_set_phy_pause_adv_yt;
default:
break;
}
diff --git a/drivers/net/ngbe/base/ngbe_phy.h b/drivers/net/ngbe/base/ngbe_phy.h
index 5d6ff1711c..f262ff3350 100644
--- a/drivers/net/ngbe/base/ngbe_phy.h
+++ b/drivers/net/ngbe/base/ngbe_phy.h
@@ -42,6 +42,9 @@ typedef struct mdi_reg mdi_reg_t;
#define NGBE_MD22_PHY_ID_HIGH 0x2 /* PHY ID High Reg*/
#define NGBE_MD22_PHY_ID_LOW 0x3 /* PHY ID Low Reg*/
+#define NGBE_TAF_SYM_PAUSE 0x1
+#define NGBE_TAF_ASM_PAUSE 0x2
+
s32 ngbe_mdi_map_register(mdi_reg_t *reg, mdi_reg_22_t *reg22);
bool ngbe_validate_phy_addr(struct ngbe_hw *hw, u32 phy_addr);
diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.c b/drivers/net/ngbe/base/ngbe_phy_mvl.c
index 86b0a072c1..2eb351d258 100644
--- a/drivers/net/ngbe/base/ngbe_phy_mvl.c
+++ b/drivers/net/ngbe/base/ngbe_phy_mvl.c
@@ -209,6 +209,63 @@ s32 ngbe_reset_phy_mvl(struct ngbe_hw *hw)
return status;
}
+s32 ngbe_get_phy_advertised_pause_mvl(struct ngbe_hw *hw, u8 *pause_bit)
+{
+ u16 value;
+ s32 status = 0;
+
+ if (hw->phy.type == ngbe_phy_mvl) {
+ status = hw->phy.read_reg(hw, MVL_ANA, 0, &value);
+ value &= MVL_CANA_ASM_PAUSE | MVL_CANA_PAUSE;
+ *pause_bit = (u8)(value >> 10);
+ } else {
+ status = hw->phy.read_reg(hw, MVL_ANA, 0, &value);
+ value &= MVL_FANA_PAUSE_MASK;
+ *pause_bit = (u8)(value >> 7);
+ }
+
+ return status;
+}
+
+s32 ngbe_get_phy_lp_advertised_pause_mvl(struct ngbe_hw *hw, u8 *pause_bit)
+{
+ u16 value;
+ s32 status = 0;
+
+ if (hw->phy.type == ngbe_phy_mvl) {
+ status = hw->phy.read_reg(hw, MVL_LPAR, 0, &value);
+ value &= MVL_CLPAR_ASM_PAUSE | MVL_CLPAR_PAUSE;
+ *pause_bit = (u8)(value >> 10);
+ } else {
+ status = hw->phy.read_reg(hw, MVL_LPAR, 0, &value);
+ value &= MVL_FLPAR_PAUSE_MASK;
+ *pause_bit = (u8)(value >> 7);
+ }
+
+ return status;
+}
+
+s32 ngbe_set_phy_pause_adv_mvl(struct ngbe_hw *hw, u16 pause_bit)
+{
+ u16 value;
+ s32 status = 0;
+
+ DEBUGFUNC("ngbe_set_phy_pause_adv_mvl");
+
+ if (hw->phy.type == ngbe_phy_mvl) {
+ status = hw->phy.read_reg(hw, MVL_ANA, 0, &value);
+ value &= ~(MVL_CANA_ASM_PAUSE | MVL_CANA_PAUSE);
+ } else {
+ status = hw->phy.read_reg(hw, MVL_ANA, 0, &value);
+ value &= ~MVL_FANA_PAUSE_MASK;
+ }
+
+ value |= pause_bit;
+ status = hw->phy.write_reg(hw, MVL_ANA, 0, value);
+
+ return status;
+}
+
s32 ngbe_check_phy_link_mvl(struct ngbe_hw *hw,
u32 *speed, bool *link_up)
{
diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.h b/drivers/net/ngbe/base/ngbe_phy_mvl.h
index 74d5ecba77..a2b5202d4b 100644
--- a/drivers/net/ngbe/base/ngbe_phy_mvl.h
+++ b/drivers/net/ngbe/base/ngbe_phy_mvl.h
@@ -94,4 +94,8 @@ s32 ngbe_check_phy_link_mvl(struct ngbe_hw *hw,
u32 *speed, bool *link_up);
s32 ngbe_setup_phy_link_mvl(struct ngbe_hw *hw,
u32 speed, bool autoneg_wait_to_complete);
+s32 ngbe_get_phy_advertised_pause_mvl(struct ngbe_hw *hw, u8 *pause_bit);
+s32 ngbe_get_phy_lp_advertised_pause_mvl(struct ngbe_hw *hw, u8 *pause_bit);
+s32 ngbe_set_phy_pause_adv_mvl(struct ngbe_hw *hw, u16 pause_bit);
+
#endif /* _NGBE_PHY_MVL_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.c b/drivers/net/ngbe/base/ngbe_phy_rtl.c
index 83830921c2..7b08b7a46c 100644
--- a/drivers/net/ngbe/base/ngbe_phy_rtl.c
+++ b/drivers/net/ngbe/base/ngbe_phy_rtl.c
@@ -249,6 +249,48 @@ s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw)
return status;
}
+s32 ngbe_get_phy_advertised_pause_rtl(struct ngbe_hw *hw, u8 *pause_bit)
+{
+ u16 value;
+ s32 status = 0;
+
+ status = hw->phy.read_reg(hw, RTL_ANAR, RTL_DEV_ZERO, &value);
+ value &= RTL_ANAR_APAUSE | RTL_ANAR_PAUSE;
+ *pause_bit = (u8)(value >> 10);
+ return status;
+}
+
+s32 ngbe_get_phy_lp_advertised_pause_rtl(struct ngbe_hw *hw, u8 *pause_bit)
+{
+ u16 value;
+ s32 status = 0;
+
+ status = hw->phy.read_reg(hw, RTL_INSR, 0xa43, &value);
+
+ status = hw->phy.read_reg(hw, RTL_BMSR, RTL_DEV_ZERO, &value);
+ value = value & RTL_BMSR_ANC;
+
+ /* if AN complete then check lp adv pause */
+ status = hw->phy.read_reg(hw, RTL_ANLPAR, RTL_DEV_ZERO, &value);
+ value &= RTL_ANLPAR_LP;
+ *pause_bit = (u8)(value >> 10);
+ return status;
+}
+
+s32 ngbe_set_phy_pause_adv_rtl(struct ngbe_hw *hw, u16 pause_bit)
+{
+ u16 value;
+ s32 status = 0;
+
+ status = hw->phy.read_reg(hw, RTL_ANAR, RTL_DEV_ZERO, &value);
+ value &= ~(RTL_ANAR_APAUSE | RTL_ANAR_PAUSE);
+ value |= pause_bit;
+
+ status = hw->phy.write_reg(hw, RTL_ANAR, RTL_DEV_ZERO, value);
+
+ return status;
+}
+
s32 ngbe_check_phy_link_rtl(struct ngbe_hw *hw, u32 *speed, bool *link_up)
{
s32 status = 0;
diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.h b/drivers/net/ngbe/base/ngbe_phy_rtl.h
index 9ce2058eac..d717a1915c 100644
--- a/drivers/net/ngbe/base/ngbe_phy_rtl.h
+++ b/drivers/net/ngbe/base/ngbe_phy_rtl.h
@@ -83,6 +83,9 @@ s32 ngbe_setup_phy_link_rtl(struct ngbe_hw *hw,
s32 ngbe_init_phy_rtl(struct ngbe_hw *hw);
s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw);
+s32 ngbe_get_phy_advertised_pause_rtl(struct ngbe_hw *hw, u8 *pause_bit);
+s32 ngbe_get_phy_lp_advertised_pause_rtl(struct ngbe_hw *hw, u8 *pause_bit);
+s32 ngbe_set_phy_pause_adv_rtl(struct ngbe_hw *hw, u16 pause_bit);
s32 ngbe_check_phy_link_rtl(struct ngbe_hw *hw,
u32 *speed, bool *link_up);
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.c b/drivers/net/ngbe/base/ngbe_phy_yt.c
index 2a7061c100..8db0f9ce48 100644
--- a/drivers/net/ngbe/base/ngbe_phy_yt.c
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.c
@@ -234,6 +234,50 @@ s32 ngbe_reset_phy_yt(struct ngbe_hw *hw)
return status;
}
+s32 ngbe_get_phy_advertised_pause_yt(struct ngbe_hw *hw, u8 *pause_bit)
+{
+ u16 value;
+ s32 status = 0;
+
+ DEBUGFUNC("ngbe_get_phy_advertised_pause_yt");
+
+ status = hw->phy.read_reg(hw, YT_ANA, 0, &value);
+ value &= YT_FANA_PAUSE_MASK;
+ *pause_bit = (u8)(value >> 7);
+
+ return status;
+}
+
+s32 ngbe_get_phy_lp_advertised_pause_yt(struct ngbe_hw *hw, u8 *pause_bit)
+{
+ u16 value;
+ s32 status = 0;
+
+ DEBUGFUNC("ngbe_get_phy_lp_advertised_pause_yt");
+
+ status = hw->phy.read_reg(hw, YT_LPAR, 0, &value);
+ value &= YT_FLPAR_PAUSE_MASK;
+ *pause_bit = (u8)(value >> 7);
+
+ return status;
+}
+
+s32 ngbe_set_phy_pause_adv_yt(struct ngbe_hw *hw, u16 pause_bit)
+{
+ u16 value;
+ s32 status = 0;
+
+ DEBUGFUNC("ngbe_set_phy_pause_adv_yt");
+
+
+ status = hw->phy.read_reg(hw, YT_ANA, 0, &value);
+ value &= ~YT_FANA_PAUSE_MASK;
+ value |= pause_bit;
+ status = hw->phy.write_reg(hw, YT_ANA, 0, value);
+
+ return status;
+}
+
s32 ngbe_check_phy_link_yt(struct ngbe_hw *hw,
u32 *speed, bool *link_up)
{
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.h b/drivers/net/ngbe/base/ngbe_phy_yt.h
index 157339cce8..e729e0c854 100644
--- a/drivers/net/ngbe/base/ngbe_phy_yt.h
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.h
@@ -73,4 +73,10 @@ s32 ngbe_check_phy_link_yt(struct ngbe_hw *hw,
u32 *speed, bool *link_up);
s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw,
u32 speed, bool autoneg_wait_to_complete);
+s32 ngbe_get_phy_advertised_pause_yt(struct ngbe_hw *hw,
+ u8 *pause_bit);
+s32 ngbe_get_phy_lp_advertised_pause_yt(struct ngbe_hw *hw,
+ u8 *pause_bit);
+s32 ngbe_set_phy_pause_adv_yt(struct ngbe_hw *hw, u16 pause_bit);
+
#endif /* _NGBE_PHY_YT_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 7a85f82abd..310d32ecfa 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -67,6 +67,15 @@ enum ngbe_media_type {
ngbe_media_type_virtual
};
+/* Flow Control Settings */
+enum ngbe_fc_mode {
+ ngbe_fc_none = 0,
+ ngbe_fc_rx_pause,
+ ngbe_fc_tx_pause,
+ ngbe_fc_full,
+ ngbe_fc_default
+};
+
struct ngbe_hw;
struct ngbe_addr_filter_info {
@@ -82,6 +91,19 @@ struct ngbe_bus_info {
u8 lan_id;
};
+/* Flow control parameters */
+struct ngbe_fc_info {
+ u32 high_water; /* Flow Ctrl High-water */
+ u32 low_water; /* Flow Ctrl Low-water */
+ u16 pause_time; /* Flow Control Pause timer */
+ bool send_xon; /* Flow control send XON */
+ bool strict_ieee; /* Strict IEEE mode */
+ bool disable_fc_autoneg; /* Do not autonegotiate FC */
+ bool fc_was_autonegged; /* Is current_mode the result of autonegging? */
+ enum ngbe_fc_mode current_mode; /* FC mode in effect */
+ enum ngbe_fc_mode requested_mode; /* FC mode requested by caller */
+};
+
/* Statistics counters collected by the MAC */
/* PB[] RxTx */
struct ngbe_pb_stats {
@@ -263,6 +285,11 @@ struct ngbe_mac_info {
void (*set_vlan_anti_spoofing)(struct ngbe_hw *hw,
bool enable, int vf);
+ /* Flow Control */
+ s32 (*fc_enable)(struct ngbe_hw *hw);
+ s32 (*setup_fc)(struct ngbe_hw *hw);
+ void (*fc_autoneg)(struct ngbe_hw *hw);
+
/* Manageability interface */
s32 (*init_thermal_sensor_thresh)(struct ngbe_hw *hw);
s32 (*check_overtemp)(struct ngbe_hw *hw);
@@ -302,6 +329,10 @@ struct ngbe_phy_info {
s32 (*setup_link)(struct ngbe_hw *hw, u32 speed,
bool autoneg_wait_to_complete);
s32 (*check_link)(struct ngbe_hw *hw, u32 *speed, bool *link_up);
+ s32 (*set_phy_power)(struct ngbe_hw *hw, bool on);
+ s32 (*get_adv_pause)(struct ngbe_hw *hw, u8 *pause_bit);
+ s32 (*get_lp_adv_pause)(struct ngbe_hw *hw, u8 *pause_bit);
+ s32 (*set_pause_adv)(struct ngbe_hw *hw, u16 pause_bit);
enum ngbe_media_type media_type;
enum ngbe_phy_type type;
@@ -349,6 +380,7 @@ struct ngbe_hw {
void *back;
struct ngbe_mac_info mac;
struct ngbe_addr_filter_info addr_ctrl;
+ struct ngbe_fc_info fc;
struct ngbe_phy_info phy;
struct ngbe_rom_info rom;
struct ngbe_bus_info bus;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 52d7b6376d..e950146f42 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -366,6 +366,14 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
/* Unlock any pending hardware semaphore */
ngbe_swfw_lock_reset(hw);
+ /* Get Hardware Flow Control setting */
+ hw->fc.requested_mode = ngbe_fc_full;
+ hw->fc.current_mode = ngbe_fc_full;
+ hw->fc.pause_time = NGBE_FC_PAUSE_TIME;
+ hw->fc.low_water = NGBE_FC_XON_LOTH;
+ hw->fc.high_water = NGBE_FC_XOFF_HITH;
+ hw->fc.send_xon = 1;
+
err = hw->rom.init_params(hw);
if (err != 0) {
PMD_INIT_LOG(ERR, "The EEPROM init failed: %d", err);
@@ -2231,6 +2239,107 @@ ngbe_dev_interrupt_handler(void *param)
ngbe_dev_interrupt_action(dev);
}
+static int
+ngbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t mflcn_reg;
+ uint32_t fccfg_reg;
+ int rx_pause;
+ int tx_pause;
+
+ fc_conf->pause_time = hw->fc.pause_time;
+ fc_conf->high_water = hw->fc.high_water;
+ fc_conf->low_water = hw->fc.low_water;
+ fc_conf->send_xon = hw->fc.send_xon;
+ fc_conf->autoneg = !hw->fc.disable_fc_autoneg;
+
+ /*
+ * Return rx_pause status according to actual setting of
+ * RXFCCFG register.
+ */
+ mflcn_reg = rd32(hw, NGBE_RXFCCFG);
+ if (mflcn_reg & NGBE_RXFCCFG_FC)
+ rx_pause = 1;
+ else
+ rx_pause = 0;
+
+ /*
+ * Return tx_pause status according to actual setting of
+ * TXFCCFG register.
+ */
+ fccfg_reg = rd32(hw, NGBE_TXFCCFG);
+ if (fccfg_reg & NGBE_TXFCCFG_FC)
+ tx_pause = 1;
+ else
+ tx_pause = 0;
+
+ if (rx_pause && tx_pause)
+ fc_conf->mode = RTE_FC_FULL;
+ else if (rx_pause)
+ fc_conf->mode = RTE_FC_RX_PAUSE;
+ else if (tx_pause)
+ fc_conf->mode = RTE_FC_TX_PAUSE;
+ else
+ fc_conf->mode = RTE_FC_NONE;
+
+ return 0;
+}
+
+static int
+ngbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ int err;
+ uint32_t rx_buf_size;
+ uint32_t max_high_water;
+ enum ngbe_fc_mode rte_fcmode_2_ngbe_fcmode[] = {
+ ngbe_fc_none,
+ ngbe_fc_rx_pause,
+ ngbe_fc_tx_pause,
+ ngbe_fc_full
+ };
+
+ PMD_INIT_FUNC_TRACE();
+
+ rx_buf_size = rd32(hw, NGBE_PBRXSIZE);
+ PMD_INIT_LOG(DEBUG, "Rx packet buffer size = 0x%x", rx_buf_size);
+
+ /*
+ * At least reserve one Ethernet frame for watermark
+ * high_water/low_water in kilo bytes for ngbe
+ */
+ max_high_water = (rx_buf_size - RTE_ETHER_MAX_LEN) >> 10;
+ if (fc_conf->high_water > max_high_water ||
+ fc_conf->high_water < fc_conf->low_water) {
+ PMD_INIT_LOG(ERR, "Invalid high/low water setup value in KB");
+ PMD_INIT_LOG(ERR, "High_water must <= 0x%x", max_high_water);
+ return -EINVAL;
+ }
+
+ hw->fc.requested_mode = rte_fcmode_2_ngbe_fcmode[fc_conf->mode];
+ hw->fc.pause_time = fc_conf->pause_time;
+ hw->fc.high_water = fc_conf->high_water;
+ hw->fc.low_water = fc_conf->low_water;
+ hw->fc.send_xon = fc_conf->send_xon;
+ hw->fc.disable_fc_autoneg = !fc_conf->autoneg;
+
+ err = hw->mac.fc_enable(hw);
+
+ /* Not negotiated is not an error case */
+ if (err == 0 || err == NGBE_ERR_FC_NOT_NEGOTIATED) {
+ wr32m(hw, NGBE_MACRXFLT, NGBE_MACRXFLT_CTL_MASK,
+ (fc_conf->mac_ctrl_frame_fwd
+ ? NGBE_MACRXFLT_CTL_NOPS : NGBE_MACRXFLT_CTL_DROP));
+ ngbe_flush(hw);
+
+ return 0;
+ }
+
+ PMD_INIT_LOG(ERR, "ngbe_fc_enable = 0x%x", err);
+ return -EIO;
+}
+
int
ngbe_dev_rss_reta_update(struct rte_eth_dev *dev,
struct rte_eth_rss_reta_entry64 *reta_conf,
@@ -2682,6 +2791,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.rx_queue_release = ngbe_dev_rx_queue_release,
.tx_queue_setup = ngbe_dev_tx_queue_setup,
.tx_queue_release = ngbe_dev_tx_queue_release,
+ .flow_ctrl_get = ngbe_flow_ctrl_get,
+ .flow_ctrl_set = ngbe_flow_ctrl_set,
.mac_addr_add = ngbe_add_rar,
.mac_addr_remove = ngbe_remove_rar,
.mac_addr_set = ngbe_set_default_mac_addr,
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 26911cc7d2..c16c6568be 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -248,6 +248,14 @@ void ngbe_pf_mbx_process(struct rte_eth_dev *eth_dev);
int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev);
+/* High threshold controlling when to start sending XOFF frames. */
+#define NGBE_FC_XOFF_HITH 128 /*KB*/
+/* Low threshold controlling when to start sending XON frames. */
+#define NGBE_FC_XON_LOTH 64 /*KB*/
+
+/* Timer value included in XOFF frames. */
+#define NGBE_FC_PAUSE_TIME 0x680
+
#define NGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
#define NGBE_LINK_UP_CHECK_TIMEOUT 1000 /* ms */
#define NGBE_VMDQ_NUM_UC_MAC 4096 /* Maximum nb. of UC MAC addr. */
--
2.21.0.windows.1
* [dpdk-dev] [PATCH 21/32] net/ngbe: support device LED on and off
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (19 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 20/32] net/ngbe: support flow control Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 22/32] net/ngbe: support EEPROM dump Jiawen Wu
` (10 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support device LED on and off.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
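Usage sketch (illustrative, not part of the commit): the new ops are
exposed through rte_eth_led_on()/rte_eth_led_off(); blink_led() below
is an assumed helper for locating a port.

#include <rte_cycles.h>
#include <rte_ethdev.h>

/* Blink the port LED once; ports whose MAC has no controllable LED
 * report -ENOTSUP, as ngbe_dev_led_on()/ngbe_dev_led_off() do below.
 */
static int
blink_led(uint16_t port_id)
{
    int ret;

    ret = rte_eth_led_on(port_id);
    if (ret != 0)
        return ret;
    rte_delay_ms(500);
    return rte_eth_led_off(port_id);
}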
drivers/net/ngbe/base/ngbe_dummy.h | 10 +++++++
drivers/net/ngbe/base/ngbe_hw.c | 48 ++++++++++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_hw.h | 3 ++
drivers/net/ngbe/base/ngbe_type.h | 4 +++
drivers/net/ngbe/ngbe_ethdev.c | 16 ++++++++++
5 files changed, 81 insertions(+)
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 0baabcbae7..9930a3a1d6 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -104,6 +104,14 @@ static inline s32 ngbe_mac_get_link_capabilities_dummy(struct ngbe_hw *TUP0,
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_mac_led_on_dummy(struct ngbe_hw *TUP0, u32 TUP1)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_led_off_dummy(struct ngbe_hw *TUP0, u32 TUP1)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_mac_set_rar_dummy(struct ngbe_hw *TUP0, u32 TUP1,
u8 *TUP2, u32 TUP3, u32 TUP4)
{
@@ -278,6 +286,8 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->mac.setup_link = ngbe_mac_setup_link_dummy;
hw->mac.check_link = ngbe_mac_check_link_dummy;
hw->mac.get_link_capabilities = ngbe_mac_get_link_capabilities_dummy;
+ hw->mac.led_on = ngbe_mac_led_on_dummy;
+ hw->mac.led_off = ngbe_mac_led_off_dummy;
hw->mac.set_rar = ngbe_mac_set_rar_dummy;
hw->mac.clear_rar = ngbe_mac_clear_rar_dummy;
hw->mac.set_vmdq = ngbe_mac_set_vmdq_dummy;
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 35351a2702..476e5f25cf 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -390,6 +390,50 @@ s32 ngbe_stop_hw(struct ngbe_hw *hw)
return 0;
}
+/**
+ * ngbe_led_on - Turns on the software controllable LEDs.
+ * @hw: pointer to hardware structure
+ * @index: led number to turn on
+ **/
+s32 ngbe_led_on(struct ngbe_hw *hw, u32 index)
+{
+ u32 led_reg = rd32(hw, NGBE_LEDCTL);
+
+ DEBUGFUNC("ngbe_led_on");
+
+ if (index > 3)
+ return NGBE_ERR_PARAM;
+
+ /* To turn on the LED, set mode to ON. */
+ led_reg |= NGBE_LEDCTL_100M;
+ wr32(hw, NGBE_LEDCTL, led_reg);
+ ngbe_flush(hw);
+
+ return 0;
+}
+
+/**
+ * ngbe_led_off - Turns off the software controllable LEDs.
+ * @hw: pointer to hardware structure
+ * @index: led number to turn off
+ **/
+s32 ngbe_led_off(struct ngbe_hw *hw, u32 index)
+{
+ u32 led_reg = rd32(hw, NGBE_LEDCTL);
+
+ DEBUGFUNC("ngbe_led_off");
+
+ if (index > 3)
+ return NGBE_ERR_PARAM;
+
+ /* To turn off the LED, set mode to OFF. */
+ led_reg &= ~NGBE_LEDCTL_100M;
+ wr32(hw, NGBE_LEDCTL, led_reg);
+ ngbe_flush(hw);
+
+ return 0;
+}
+
/**
* ngbe_validate_mac_addr - Validate MAC address
* @mac_addr: pointer to MAC address.
@@ -1836,6 +1880,10 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
mac->disable_sec_rx_path = ngbe_disable_sec_rx_path;
mac->enable_sec_rx_path = ngbe_enable_sec_rx_path;
+ /* LEDs */
+ mac->led_on = ngbe_led_on;
+ mac->led_off = ngbe_led_off;
+
/* RAR, Multicast, VLAN */
mac->set_rar = ngbe_set_rar;
mac->clear_rar = ngbe_clear_rar;
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index a84ddca6ac..ad7e8fc2d9 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -32,6 +32,9 @@ s32 ngbe_setup_mac_link_em(struct ngbe_hw *hw,
u32 speed,
bool autoneg_wait_to_complete);
+s32 ngbe_led_on(struct ngbe_hw *hw, u32 index);
+s32 ngbe_led_off(struct ngbe_hw *hw, u32 index);
+
s32 ngbe_set_rar(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
u32 enable_addr);
s32 ngbe_clear_rar(struct ngbe_hw *hw, u32 index);
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 310d32ecfa..886dffc0db 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -265,6 +265,10 @@ struct ngbe_mac_info {
s32 (*get_link_capabilities)(struct ngbe_hw *hw,
u32 *speed, bool *autoneg);
+ /* LED */
+ s32 (*led_on)(struct ngbe_hw *hw, u32 index);
+ s32 (*led_off)(struct ngbe_hw *hw, u32 index);
+
/* RAR */
s32 (*set_rar)(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
u32 enable_addr);
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index e950146f42..6ed836df9e 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -2239,6 +2239,20 @@ ngbe_dev_interrupt_handler(void *param)
ngbe_dev_interrupt_action(dev);
}
+static int
+ngbe_dev_led_on(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ return hw->mac.led_on(hw, 0) == 0 ? 0 : -ENOTSUP;
+}
+
+static int
+ngbe_dev_led_off(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ return hw->mac.led_off(hw, 0) == 0 ? 0 : -ENOTSUP;
+}
+
static int
ngbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
{
@@ -2791,6 +2805,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.rx_queue_release = ngbe_dev_rx_queue_release,
.tx_queue_setup = ngbe_dev_tx_queue_setup,
.tx_queue_release = ngbe_dev_tx_queue_release,
+ .dev_led_on = ngbe_dev_led_on,
+ .dev_led_off = ngbe_dev_led_off,
.flow_ctrl_get = ngbe_flow_ctrl_get,
.flow_ctrl_set = ngbe_flow_ctrl_set,
.mac_addr_add = ngbe_add_rar,
--
2.21.0.windows.1
* [dpdk-dev] [PATCH 22/32] net/ngbe: support EEPROM dump
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (20 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 21/32] net/ngbe: support device LED on and off Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 23/32] net/ngbe: support register dump Jiawen Wu
` (9 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support getting and setting device EEPROM data.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
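Usage sketch (illustrative, not part of the commit): offset and length
in rte_dev_eeprom_info are byte counts, matching the byte length that
ngbe_get_eeprom_length() returns below. dump_eeprom() is an assumed
helper.

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>

/* Dump the whole EEPROM of a port into a heap buffer. */
static int
dump_eeprom(uint16_t port_id)
{
    struct rte_dev_eeprom_info info;
    int len, ret;

    len = rte_eth_dev_get_eeprom_length(port_id);
    if (len <= 0)
        return -EIO;

    memset(&info, 0, sizeof(info));
    info.data = malloc(len);
    if (info.data == NULL)
        return -ENOMEM;
    info.length = len;  /* dump everything; info.offset stays 0 */

    ret = rte_eth_dev_get_eeprom(port_id, &info);
    /* ... consume info.data on success ... */
    free(info.data);
    return ret;
}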
doc/guides/nics/features/ngbe.ini | 1 +
drivers/net/ngbe/base/ngbe_dummy.h | 12 +++++
drivers/net/ngbe/base/ngbe_eeprom.c | 77 +++++++++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_eeprom.h | 5 ++
drivers/net/ngbe/base/ngbe_hw.c | 2 +
drivers/net/ngbe/base/ngbe_mng.c | 41 +++++++++++++++
drivers/net/ngbe/base/ngbe_mng.h | 13 +++++
drivers/net/ngbe/base/ngbe_type.h | 4 ++
drivers/net/ngbe/ngbe_ethdev.c | 52 +++++++++++++++++++
9 files changed, 207 insertions(+)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 00150282cb..3c169ab774 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -35,6 +35,7 @@ Basic stats = Y
Extended stats = Y
Stats per queue = Y
FW version = Y
+EEPROM dump = Y
Multiprocess aware = Y
Linux = Y
ARMv8 = Y
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 9930a3a1d6..61b0d82bfb 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -33,11 +33,21 @@ static inline s32 ngbe_rom_init_params_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_rom_readw_buffer_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+ u32 TUP2, void *TUP3)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_rom_read32_dummy(struct ngbe_hw *TUP0, u32 TUP1,
u32 *TUP2)
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_rom_writew_buffer_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+ u32 TUP2, void *TUP3)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_rom_validate_checksum_dummy(struct ngbe_hw *TUP0,
u16 *TUP1)
{
@@ -270,7 +280,9 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
{
hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy;
hw->rom.init_params = ngbe_rom_init_params_dummy;
+ hw->rom.readw_buffer = ngbe_rom_readw_buffer_dummy;
hw->rom.read32 = ngbe_rom_read32_dummy;
+ hw->rom.writew_buffer = ngbe_rom_writew_buffer_dummy;
hw->rom.validate_checksum = ngbe_rom_validate_checksum_dummy;
hw->mac.init_hw = ngbe_mac_init_hw_dummy;
hw->mac.reset_hw = ngbe_mac_reset_hw_dummy;
diff --git a/drivers/net/ngbe/base/ngbe_eeprom.c b/drivers/net/ngbe/base/ngbe_eeprom.c
index 9ae2f0badb..f9a876e9bd 100644
--- a/drivers/net/ngbe/base/ngbe_eeprom.c
+++ b/drivers/net/ngbe/base/ngbe_eeprom.c
@@ -161,6 +161,45 @@ void ngbe_release_eeprom_semaphore(struct ngbe_hw *hw)
ngbe_flush(hw);
}
+/**
+ * ngbe_ee_readw_buffer - Read EEPROM word(s) using hostif
+ * @hw: pointer to hardware structure
+ * @offset: offset of word in the EEPROM to read
+ * @words: number of words
+ * @data: word(s) read from the EEPROM
+ *
+ * Reads 16 bit word(s) from the EEPROM using the hostif.
+ **/
+s32 ngbe_ee_readw_buffer(struct ngbe_hw *hw,
+ u32 offset, u32 words, void *data)
+{
+ const u32 mask = NGBE_MNGSEM_SWMBX | NGBE_MNGSEM_SWFLASH;
+ u32 addr = (offset << 1);
+ u32 len = (words << 1);
+ u8 *buf = (u8 *)data;
+ int err;
+
+ err = hw->mac.acquire_swfw_sync(hw, mask);
+ if (err)
+ return err;
+
+ while (len) {
+ u32 seg = (len <= NGBE_PMMBX_DATA_SIZE
+ ? len : NGBE_PMMBX_DATA_SIZE);
+
+ err = ngbe_hic_sr_read(hw, addr, buf, seg);
+ if (err)
+ break;
+
+ len -= seg;
+ addr += seg;
+ buf += seg;
+ }
+
+ hw->mac.release_swfw_sync(hw, mask);
+ return err;
+}
+
/**
* ngbe_ee_read32 - Read EEPROM word using a host interface cmd
* @hw: pointer to hardware structure
@@ -185,6 +224,44 @@ s32 ngbe_ee_read32(struct ngbe_hw *hw, u32 addr, u32 *data)
return err;
}
+/**
+ * ngbe_ee_writew_buffer - Write EEPROM word(s) using hostif
+ * @hw: pointer to hardware structure
+ * @offset: offset of word in the EEPROM to write
+ * @words: number of words
+ * @data: word(s) to write to the EEPROM
+ *
+ * Writes 16 bit word(s) to the EEPROM using the hostif.
+ **/
+s32 ngbe_ee_writew_buffer(struct ngbe_hw *hw,
+ u32 offset, u32 words, void *data)
+{
+ const u32 mask = NGBE_MNGSEM_SWMBX | NGBE_MNGSEM_SWFLASH;
+ u32 addr = (offset << 1);
+ u32 len = (words << 1);
+ u8 *buf = (u8 *)data;
+ int err;
+
+ err = hw->mac.acquire_swfw_sync(hw, mask);
+ if (err)
+ return err;
+
+ while (len) {
+ u32 seg = (len <= NGBE_PMMBX_DATA_SIZE
+ ? len : NGBE_PMMBX_DATA_SIZE);
+
+ err = ngbe_hic_sr_write(hw, addr, buf, seg);
+ if (err)
+ break;
+
+ len -= seg;
+ addr += seg;
+ buf += seg;
+ }
+
+ hw->mac.release_swfw_sync(hw, mask);
+ return err;
+}
+
/**
* ngbe_validate_eeprom_checksum_em - Validate EEPROM checksum
* @hw: pointer to hardware structure
diff --git a/drivers/net/ngbe/base/ngbe_eeprom.h b/drivers/net/ngbe/base/ngbe_eeprom.h
index 5f27425913..26ac686723 100644
--- a/drivers/net/ngbe/base/ngbe_eeprom.h
+++ b/drivers/net/ngbe/base/ngbe_eeprom.h
@@ -17,6 +17,11 @@ s32 ngbe_get_eeprom_semaphore(struct ngbe_hw *hw);
void ngbe_release_eeprom_semaphore(struct ngbe_hw *hw);
s32 ngbe_save_eeprom_version(struct ngbe_hw *hw);
+s32 ngbe_ee_readw_buffer(struct ngbe_hw *hw, u32 offset, u32 words,
+ void *data);
s32 ngbe_ee_read32(struct ngbe_hw *hw, u32 addr, u32 *data);
+s32 ngbe_ee_writew_buffer(struct ngbe_hw *hw, u32 offset, u32 words,
+ void *data);
+
#endif /* _NGBE_EEPROM_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 476e5f25cf..218e612461 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -1920,7 +1920,9 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
/* EEPROM */
rom->init_params = ngbe_init_eeprom_params;
+ rom->readw_buffer = ngbe_ee_readw_buffer;
rom->read32 = ngbe_ee_read32;
+ rom->writew_buffer = ngbe_ee_writew_buffer;
rom->validate_checksum = ngbe_validate_eeprom_checksum_em;
mac->mcft_size = NGBE_EM_MC_TBL_SIZE;
diff --git a/drivers/net/ngbe/base/ngbe_mng.c b/drivers/net/ngbe/base/ngbe_mng.c
index 9416ea4c8d..a3dd8093ce 100644
--- a/drivers/net/ngbe/base/ngbe_mng.c
+++ b/drivers/net/ngbe/base/ngbe_mng.c
@@ -202,6 +202,47 @@ s32 ngbe_hic_sr_read(struct ngbe_hw *hw, u32 addr, u8 *buf, int len)
return 0;
}
+/**
+ * ngbe_hic_sr_write - Write EEPROM words using hostif
+ * @hw: pointer to hardware structure
+ * @addr: byte address in the EEPROM to write
+ * @buf: bytes to write to the EEPROM
+ * @len: number of bytes to write (at most NGBE_PMMBX_DATA_SIZE)
+ *
+ **/
+s32 ngbe_hic_sr_write(struct ngbe_hw *hw, u32 addr, u8 *buf, int len)
+{
+ struct ngbe_hic_write_shadow_ram command;
+ u32 value;
+ int err = 0, i = 0, j = 0;
+
+ if (len > NGBE_PMMBX_DATA_SIZE)
+ return NGBE_ERR_HOST_INTERFACE_COMMAND;
+
+ memset(&command, 0, sizeof(command));
+ command.hdr.req.cmd = FW_WRITE_SHADOW_RAM_CMD;
+ command.hdr.req.buf_lenh = 0;
+ command.hdr.req.buf_lenl = FW_WRITE_SHADOW_RAM_LEN;
+ command.hdr.req.checksum = FW_DEFAULT_CHECKSUM;
+ command.address = cpu_to_be32(addr);
+ command.length = cpu_to_be16(len);
+
+ while (i < (len >> 2)) {
+ value = ((u32 *)buf)[i];
+ wr32a(hw, NGBE_MNGMBX, FW_NVM_DATA_OFFSET + i, value);
+ i++;
+ }
+
+ for (i <<= 2; i < len; i++)
+ ((u8 *)&value)[j++] = ((u8 *)buf)[i];
+
+ wr32a(hw, NGBE_MNGMBX, FW_NVM_DATA_OFFSET + (i >> 2), value);
+
+ UNREFERENCED_PARAMETER(&command);
+
+ return err;
+}
+
s32 ngbe_hic_check_cap(struct ngbe_hw *hw)
{
struct ngbe_hic_read_shadow_ram command;
diff --git a/drivers/net/ngbe/base/ngbe_mng.h b/drivers/net/ngbe/base/ngbe_mng.h
index 6f368b028f..e3d0309cbc 100644
--- a/drivers/net/ngbe/base/ngbe_mng.h
+++ b/drivers/net/ngbe/base/ngbe_mng.h
@@ -18,6 +18,8 @@
#define FW_CEM_RESP_STATUS_SUCCESS 0x1
#define FW_READ_SHADOW_RAM_CMD 0x31
#define FW_READ_SHADOW_RAM_LEN 0x6
+#define FW_WRITE_SHADOW_RAM_CMD 0x33
+#define FW_WRITE_SHADOW_RAM_LEN 0xA /* 8 plus 1 WORD to write */
#define FW_DEFAULT_CHECKSUM 0xFF /* checksum always 0xFF */
#define FW_NVM_DATA_OFFSET 3
#define FW_EEPROM_CHECK_STATUS 0xE9
@@ -65,6 +67,17 @@ struct ngbe_hic_read_shadow_ram {
u16 pad3;
};
+struct ngbe_hic_write_shadow_ram {
+ union ngbe_hic_hdr2 hdr;
+ u32 address;
+ u16 length;
+ u16 pad2;
+ u16 data;
+ u16 pad3;
+};
+
s32 ngbe_hic_sr_read(struct ngbe_hw *hw, u32 addr, u8 *buf, int len);
+s32 ngbe_hic_sr_write(struct ngbe_hw *hw, u32 addr, u8 *buf, int len);
+
s32 ngbe_hic_check_cap(struct ngbe_hw *hw);
#endif /* _NGBE_MNG_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 886dffc0db..32d3ab5d03 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -231,7 +231,11 @@ typedef u8* (*ngbe_mc_addr_itr) (struct ngbe_hw *hw, u8 **mc_addr_ptr,
struct ngbe_rom_info {
s32 (*init_params)(struct ngbe_hw *hw);
+ s32 (*readw_buffer)(struct ngbe_hw *hw, u32 offset, u32 words,
+ void *data);
s32 (*read32)(struct ngbe_hw *hw, u32 addr, u32 *data);
+ s32 (*writew_buffer)(struct ngbe_hw *hw, u32 offset, u32 words,
+ void *data);
s32 (*validate_checksum)(struct ngbe_hw *hw, u16 *checksum_val);
enum ngbe_eeprom_type type;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 6ed836df9e..1cf4ca54af 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -2769,6 +2769,55 @@ ngbe_dev_set_mc_addr_list(struct rte_eth_dev *dev,
ngbe_dev_addr_list_itr, TRUE);
}
+static int
+ngbe_get_eeprom_length(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+ /* Return unit is byte count */
+ return hw->rom.word_size * 2;
+}
+
+static int
+ngbe_get_eeprom(struct rte_eth_dev *dev,
+ struct rte_dev_eeprom_info *in_eeprom)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_rom_info *eeprom = &hw->rom;
+ uint16_t *data = in_eeprom->data;
+ int first, length;
+
+ first = in_eeprom->offset >> 1;
+ length = in_eeprom->length >> 1;
+ if (first > hw->rom.word_size ||
+ ((first + length) > hw->rom.word_size))
+ return -EINVAL;
+
+ in_eeprom->magic = hw->vendor_id | (hw->device_id << 16);
+
+ return eeprom->readw_buffer(hw, first, length, data);
+}
+
+static int
+ngbe_set_eeprom(struct rte_eth_dev *dev,
+ struct rte_dev_eeprom_info *in_eeprom)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_rom_info *eeprom = &hw->rom;
+ uint16_t *data = in_eeprom->data;
+ int first, length;
+
+ first = in_eeprom->offset >> 1;
+ length = in_eeprom->length >> 1;
+ if (first > hw->rom.word_size ||
+ ((first + length) > hw->rom.word_size))
+ return -EINVAL;
+
+ in_eeprom->magic = hw->vendor_id | (hw->device_id << 16);
+
+ return eeprom->writew_buffer(hw, first, length, data);
+}
+
static const struct eth_dev_ops ngbe_eth_dev_ops = {
.dev_configure = ngbe_dev_configure,
.dev_infos_get = ngbe_dev_info_get,
@@ -2819,6 +2868,9 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.rss_hash_update = ngbe_dev_rss_hash_update,
.rss_hash_conf_get = ngbe_dev_rss_hash_conf_get,
.set_mc_addr_list = ngbe_dev_set_mc_addr_list,
+ .get_eeprom_length = ngbe_get_eeprom_length,
+ .get_eeprom = ngbe_get_eeprom,
+ .set_eeprom = ngbe_set_eeprom,
};
RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
--
2.21.0.windows.1
* [dpdk-dev] [PATCH 23/32] net/ngbe: support register dump
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (21 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 22/32] net/ngbe: support EEPROM dump Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 24/32] net/ngbe: support timesync Jiawen Wu
` (8 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support dumping device registers.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
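Usage sketch (illustrative, not part of the commit): ngbe_get_regs()
below follows the usual ethdev two-call pattern, where a first call
with data == NULL only reports length and width. dump_regs() is an
assumed helper.

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>

static int
dump_regs(uint16_t port_id)
{
    struct rte_dev_reg_info info;
    int ret;

    memset(&info, 0, sizeof(info));
    ret = rte_eth_dev_get_reg_info(port_id, &info); /* query size */
    if (ret != 0)
        return ret;

    info.data = calloc(info.length, info.width);
    if (info.data == NULL)
        return -ENOMEM;

    ret = rte_eth_dev_get_reg_info(port_id, &info); /* full dump */
    /* ... inspect info.data ... */
    free(info.data);
    return ret;
}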
doc/guides/nics/features/ngbe.ini | 1 +
drivers/net/ngbe/base/ngbe_type.h | 1 +
drivers/net/ngbe/ngbe_ethdev.c | 108 +++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_regs_group.h | 54 +++++++++++++++
4 files changed, 164 insertions(+)
create mode 100644 drivers/net/ngbe/ngbe_regs_group.h
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 3c169ab774..1d6399a2e7 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -36,6 +36,7 @@ Extended stats = Y
Stats per queue = Y
FW version = Y
EEPROM dump = Y
+Registers dump = Y
Multiprocess aware = Y
Linux = Y
ARMv8 = Y
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 32d3ab5d03..12847b7272 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -398,6 +398,7 @@ struct ngbe_hw {
u16 sub_device_id;
u16 sub_system_id;
u32 eeprom_id;
+ u8 revision_id;
bool adapter_stopped;
uint64_t isb_dma;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 1cf4ca54af..4d94bc8b83 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -13,6 +13,67 @@
#include "ngbe.h"
#include "ngbe_ethdev.h"
#include "ngbe_rxtx.h"
+#include "ngbe_regs_group.h"
+
+static const struct reg_info ngbe_regs_general[] = {
+ {NGBE_RST, 1, 1, "NGBE_RST"},
+ {NGBE_STAT, 1, 1, "NGBE_STAT"},
+ {NGBE_PORTCTL, 1, 1, "NGBE_PORTCTL"},
+ {NGBE_GPIODATA, 1, 1, "NGBE_GPIODATA"},
+ {NGBE_GPIOCTL, 1, 1, "NGBE_GPIOCTL"},
+ {NGBE_LEDCTL, 1, 1, "NGBE_LEDCTL"},
+ {0, 0, 0, ""}
+};
+
+static const struct reg_info ngbe_regs_nvm[] = {
+ {0, 0, 0, ""}
+};
+
+static const struct reg_info ngbe_regs_interrupt[] = {
+ {0, 0, 0, ""}
+};
+
+static const struct reg_info ngbe_regs_fctl_others[] = {
+ {0, 0, 0, ""}
+};
+
+static const struct reg_info ngbe_regs_rxdma[] = {
+ {0, 0, 0, ""}
+};
+
+static const struct reg_info ngbe_regs_rx[] = {
+ {0, 0, 0, ""}
+};
+
+static struct reg_info ngbe_regs_tx[] = {
+ {0, 0, 0, ""}
+};
+
+static const struct reg_info ngbe_regs_wakeup[] = {
+ {0, 0, 0, ""}
+};
+
+static const struct reg_info ngbe_regs_mac[] = {
+ {0, 0, 0, ""}
+};
+
+static const struct reg_info ngbe_regs_diagnostic[] = {
+ {0, 0, 0, ""},
+};
+
+/* PF registers */
+static const struct reg_info *ngbe_regs_others[] = {
+ ngbe_regs_general,
+ ngbe_regs_nvm,
+ ngbe_regs_interrupt,
+ ngbe_regs_fctl_others,
+ ngbe_regs_rxdma,
+ ngbe_regs_rx,
+ ngbe_regs_tx,
+ ngbe_regs_wakeup,
+ ngbe_regs_mac,
+ ngbe_regs_diagnostic,
+ NULL};
static int ngbe_dev_close(struct rte_eth_dev *dev);
static int ngbe_dev_link_update(struct rte_eth_dev *dev,
@@ -2769,6 +2830,52 @@ ngbe_dev_set_mc_addr_list(struct rte_eth_dev *dev,
ngbe_dev_addr_list_itr, TRUE);
}
+static int
+ngbe_get_reg_length(struct rte_eth_dev *dev __rte_unused)
+{
+ int count = 0;
+ int g_ind = 0;
+ const struct reg_info *reg_group;
+ const struct reg_info **reg_set = ngbe_regs_others;
+
+ while ((reg_group = reg_set[g_ind++]))
+ count += ngbe_regs_group_count(reg_group);
+
+ return count;
+}
+
+static int
+ngbe_get_regs(struct rte_eth_dev *dev,
+ struct rte_dev_reg_info *regs)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t *data = regs->data;
+ int g_ind = 0;
+ int count = 0;
+ const struct reg_info *reg_group;
+ const struct reg_info **reg_set = ngbe_regs_others;
+
+ if (data == NULL) {
+ regs->length = ngbe_get_reg_length(dev);
+ regs->width = sizeof(uint32_t);
+ return 0;
+ }
+
+ /* Support only full register dump */
+ if (regs->length == 0 ||
+ regs->length == (uint32_t)ngbe_get_reg_length(dev)) {
+ regs->version = hw->mac.type << 24 |
+ hw->revision_id << 16 |
+ hw->device_id;
+ while ((reg_group = reg_set[g_ind++]))
+ count += ngbe_read_regs_group(dev, &data[count],
+ reg_group);
+ return 0;
+ }
+
+ return -ENOTSUP;
+}
+
static int
ngbe_get_eeprom_length(struct rte_eth_dev *dev)
{
@@ -2868,6 +2975,7 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.rss_hash_update = ngbe_dev_rss_hash_update,
.rss_hash_conf_get = ngbe_dev_rss_hash_conf_get,
.set_mc_addr_list = ngbe_dev_set_mc_addr_list,
+ .get_reg = ngbe_get_regs,
.get_eeprom_length = ngbe_get_eeprom_length,
.get_eeprom = ngbe_get_eeprom,
.set_eeprom = ngbe_set_eeprom,
diff --git a/drivers/net/ngbe/ngbe_regs_group.h b/drivers/net/ngbe/ngbe_regs_group.h
new file mode 100644
index 0000000000..cc4b69fd54
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_regs_group.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_REGS_GROUP_H_
+#define _NGBE_REGS_GROUP_H_
+
+#include "ngbe_ethdev.h"
+
+struct ngbe_hw;
+struct reg_info {
+ uint32_t base_addr;
+ uint32_t count;
+ uint32_t stride;
+ const char *name;
+};
+
+static inline int
+ngbe_read_regs(struct ngbe_hw *hw, const struct reg_info *reg,
+ uint32_t *reg_buf)
+{
+ unsigned int i;
+
+ for (i = 0; i < reg->count; i++)
+ reg_buf[i] = rd32(hw, reg->base_addr + i * reg->stride);
+ return reg->count;
+};
+
+static inline int
+ngbe_regs_group_count(const struct reg_info *regs)
+{
+ int count = 0;
+ int i = 0;
+
+ while (regs[i].count)
+ count += regs[i++].count;
+ return count;
+};
+
+static inline int
+ngbe_read_regs_group(struct rte_eth_dev *dev, uint32_t *reg_buf,
+ const struct reg_info *regs)
+{
+ int count = 0;
+ int i = 0;
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+ while (regs[i].count)
+ count += ngbe_read_regs(hw, &regs[i++], &reg_buf[count]);
+ return count;
+};
+
+#endif /* _NGBE_REGS_GROUP_H_ */
--
2.21.0.windows.1
* [dpdk-dev] [PATCH 24/32] net/ngbe: support timesync
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (22 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 23/32] net/ngbe: support register dump Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 25/32] net/ngbe: add Rx and Tx queue info get Jiawen Wu
` (7 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add support for IEEE 1588/802.1AS timestamping, and IEEE 1588 timestamp
offload on Tx.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
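Usage sketch (illustrative, not part of the commit): the ops added
below back the generic rte_eth_timesync_*() calls; start_ptp() is an
assumed helper, and the -1500 ns step stands in for a PTP servo's
computed offset.

#include <time.h>
#include <rte_ethdev.h>

static int
start_ptp(uint16_t port_id)
{
    struct timespec ts;
    int ret;

    /* Start timestamping and reset the device clock. */
    ret = rte_eth_timesync_enable(port_id);
    if (ret != 0)
        return ret;

    /* Seed the device clock from the system clock... */
    clock_gettime(CLOCK_REALTIME, &ts);
    ret = rte_eth_timesync_write_time(port_id, &ts);
    if (ret != 0)
        return ret;

    /* ...then step it, as a PTP servo would after each sync. */
    return rte_eth_timesync_adjust_time(port_id, -1500);
}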
doc/guides/nics/features/ngbe.ini | 1 +
doc/guides/nics/ngbe.rst | 1 +
drivers/net/ngbe/ngbe_ethdev.c | 216 ++++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_ethdev.h | 10 ++
drivers/net/ngbe/ngbe_rxtx.c | 33 ++++-
5 files changed, 260 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 1d6399a2e7..c780f1aa68 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -31,6 +31,7 @@ L4 checksum offload = P
Inner L3 checksum = P
Inner L4 checksum = P
Packet type parsing = Y
+Timesync = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 09175e83cd..67fc7c89cc 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -26,6 +26,7 @@ Features
- Link flow control
- Interrupt mode for RX
- Scattered and gather for TX and RX
+- IEEE 1588
- FW version
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 4d94bc8b83..506b94168c 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -2830,6 +2830,215 @@ ngbe_dev_set_mc_addr_list(struct rte_eth_dev *dev,
ngbe_dev_addr_list_itr, TRUE);
}
+static uint64_t
+ngbe_read_systime_cyclecounter(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint64_t systime_cycles;
+
+ systime_cycles = (uint64_t)rd32(hw, NGBE_TSTIMEL);
+ systime_cycles |= (uint64_t)rd32(hw, NGBE_TSTIMEH) << 32;
+
+ return systime_cycles;
+}
+
+static uint64_t
+ngbe_read_rx_tstamp_cyclecounter(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint64_t rx_tstamp_cycles;
+
+ /* TSRXSTMPL stores ns and TSRXSTMPH stores seconds. */
+ rx_tstamp_cycles = (uint64_t)rd32(hw, NGBE_TSRXSTMPL);
+ rx_tstamp_cycles |= (uint64_t)rd32(hw, NGBE_TSRXSTMPH) << 32;
+
+ return rx_tstamp_cycles;
+}
+
+static uint64_t
+ngbe_read_tx_tstamp_cyclecounter(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint64_t tx_tstamp_cycles;
+
+ /* TSTXSTMPL stores ns and TSTXSTMPH stores seconds. */
+ tx_tstamp_cycles = (uint64_t)rd32(hw, NGBE_TSTXSTMPL);
+ tx_tstamp_cycles |= (uint64_t)rd32(hw, NGBE_TSTXSTMPH) << 32;
+
+ return tx_tstamp_cycles;
+}
+
+static void
+ngbe_start_timecounters(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
+ uint32_t incval = 0;
+ uint32_t shift = 0;
+
+ incval = NGBE_INCVAL_1GB;
+ shift = NGBE_INCVAL_SHIFT_1GB;
+
+ wr32(hw, NGBE_TSTIMEINC, NGBE_TSTIMEINC_IV(incval));
+
+ memset(&adapter->systime_tc, 0, sizeof(struct rte_timecounter));
+ memset(&adapter->rx_tstamp_tc, 0, sizeof(struct rte_timecounter));
+ memset(&adapter->tx_tstamp_tc, 0, sizeof(struct rte_timecounter));
+
+ adapter->systime_tc.cc_mask = NGBE_CYCLECOUNTER_MASK;
+ adapter->systime_tc.cc_shift = shift;
+ adapter->systime_tc.nsec_mask = (1ULL << shift) - 1;
+
+ adapter->rx_tstamp_tc.cc_mask = NGBE_CYCLECOUNTER_MASK;
+ adapter->rx_tstamp_tc.cc_shift = shift;
+ adapter->rx_tstamp_tc.nsec_mask = (1ULL << shift) - 1;
+
+ adapter->tx_tstamp_tc.cc_mask = NGBE_CYCLECOUNTER_MASK;
+ adapter->tx_tstamp_tc.cc_shift = shift;
+ adapter->tx_tstamp_tc.nsec_mask = (1ULL << shift) - 1;
+}
+
+static int
+ngbe_timesync_adjust_time(struct rte_eth_dev *dev, int64_t delta)
+{
+ struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
+
+ adapter->systime_tc.nsec += delta;
+ adapter->rx_tstamp_tc.nsec += delta;
+ adapter->tx_tstamp_tc.nsec += delta;
+
+ return 0;
+}
+
+static int
+ngbe_timesync_write_time(struct rte_eth_dev *dev, const struct timespec *ts)
+{
+ uint64_t ns;
+ struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
+
+ ns = rte_timespec_to_ns(ts);
+ /* Set the timecounters to a new value. */
+ adapter->systime_tc.nsec = ns;
+ adapter->rx_tstamp_tc.nsec = ns;
+ adapter->tx_tstamp_tc.nsec = ns;
+
+ return 0;
+}
+
+static int
+ngbe_timesync_read_time(struct rte_eth_dev *dev, struct timespec *ts)
+{
+ uint64_t ns, systime_cycles;
+ struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
+
+ systime_cycles = ngbe_read_systime_cyclecounter(dev);
+ ns = rte_timecounter_update(&adapter->systime_tc, systime_cycles);
+ *ts = rte_ns_to_timespec(ns);
+
+ return 0;
+}
+
+static int
+ngbe_timesync_enable(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t tsync_ctl;
+
+ /* Stop the timesync system time. */
+ wr32(hw, NGBE_TSTIMEINC, 0x0);
+ /* Reset the timesync system time value. */
+ wr32(hw, NGBE_TSTIMEL, 0x0);
+ wr32(hw, NGBE_TSTIMEH, 0x0);
+
+ ngbe_start_timecounters(dev);
+
+ /* Enable L2 filtering of IEEE1588/802.1AS Ethernet frame types. */
+ wr32(hw, NGBE_ETFLT(NGBE_ETF_ID_1588),
+ RTE_ETHER_TYPE_1588 | NGBE_ETFLT_ENA | NGBE_ETFLT_1588);
+
+ /* Enable timestamping of received PTP packets. */
+ tsync_ctl = rd32(hw, NGBE_TSRXCTL);
+ tsync_ctl |= NGBE_TSRXCTL_ENA;
+ wr32(hw, NGBE_TSRXCTL, tsync_ctl);
+
+ /* Enable timestamping of transmitted PTP packets. */
+ tsync_ctl = rd32(hw, NGBE_TSTXCTL);
+ tsync_ctl |= NGBE_TSTXCTL_ENA;
+ wr32(hw, NGBE_TSTXCTL, tsync_ctl);
+
+ ngbe_flush(hw);
+
+ return 0;
+}
+
+static int
+ngbe_timesync_disable(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t tsync_ctl;
+
+ /* Disable timestamping of transmitted PTP packets. */
+ tsync_ctl = rd32(hw, NGBE_TSTXCTL);
+ tsync_ctl &= ~NGBE_TSTXCTL_ENA;
+ wr32(hw, NGBE_TSTXCTL, tsync_ctl);
+
+ /* Disable timestamping of received PTP packets. */
+ tsync_ctl = rd32(hw, NGBE_TSRXCTL);
+ tsync_ctl &= ~NGBE_TSRXCTL_ENA;
+ wr32(hw, NGBE_TSRXCTL, tsync_ctl);
+
+ /* Disable L2 filtering of IEEE1588/802.1AS Ethernet frame types. */
+ wr32(hw, NGBE_ETFLT(NGBE_ETF_ID_1588), 0);
+
+ /* Stop incrementing the System Time registers. */
+ wr32(hw, NGBE_TSTIMEINC, 0);
+
+ return 0;
+}
+
+static int
+ngbe_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp,
+ uint32_t flags __rte_unused)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
+ uint32_t tsync_rxctl;
+ uint64_t rx_tstamp_cycles;
+ uint64_t ns;
+
+ tsync_rxctl = rd32(hw, NGBE_TSRXCTL);
+ if ((tsync_rxctl & NGBE_TSRXCTL_VLD) == 0)
+ return -EINVAL;
+
+ rx_tstamp_cycles = ngbe_read_rx_tstamp_cyclecounter(dev);
+ ns = rte_timecounter_update(&adapter->rx_tstamp_tc, rx_tstamp_cycles);
+ *timestamp = rte_ns_to_timespec(ns);
+
+ return 0;
+}
+
+static int
+ngbe_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
+ uint32_t tsync_txctl;
+ uint64_t tx_tstamp_cycles;
+ uint64_t ns;
+
+ tsync_txctl = rd32(hw, NGBE_TSTXCTL);
+ if ((tsync_txctl & NGBE_TSTXCTL_VLD) == 0)
+ return -EINVAL;
+
+ tx_tstamp_cycles = ngbe_read_tx_tstamp_cyclecounter(dev);
+ ns = rte_timecounter_update(&adapter->tx_tstamp_tc, tx_tstamp_cycles);
+ *timestamp = rte_ns_to_timespec(ns);
+
+ return 0;
+}
+
static int
ngbe_get_reg_length(struct rte_eth_dev *dev __rte_unused)
{
@@ -2975,10 +3184,17 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.rss_hash_update = ngbe_dev_rss_hash_update,
.rss_hash_conf_get = ngbe_dev_rss_hash_conf_get,
.set_mc_addr_list = ngbe_dev_set_mc_addr_list,
+ .timesync_enable = ngbe_timesync_enable,
+ .timesync_disable = ngbe_timesync_disable,
+ .timesync_read_rx_timestamp = ngbe_timesync_read_rx_timestamp,
+ .timesync_read_tx_timestamp = ngbe_timesync_read_tx_timestamp,
.get_reg = ngbe_get_regs,
.get_eeprom_length = ngbe_get_eeprom_length,
.get_eeprom = ngbe_get_eeprom,
.set_eeprom = ngbe_set_eeprom,
+ .timesync_adjust_time = ngbe_timesync_adjust_time,
+ .timesync_read_time = ngbe_timesync_read_time,
+ .timesync_write_time = ngbe_timesync_write_time,
};
RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index c16c6568be..b6e623ab0f 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -7,6 +7,7 @@
#define _NGBE_ETHDEV_H_
#include "ngbe_ptypes.h"
+#include <rte_time.h>
#include <rte_ethdev.h>
#include <rte_ethdev_core.h>
@@ -107,6 +108,9 @@ struct ngbe_adapter {
struct ngbe_vf_info *vfdata;
struct ngbe_uta_info uta_info;
bool rx_bulk_alloc_allowed;
+ struct rte_timecounter systime_tc;
+ struct rte_timecounter rx_tstamp_tc;
+ struct rte_timecounter tx_tstamp_tc;
/* For RSS reta table update */
uint8_t rss_reta_updated;
@@ -273,6 +277,12 @@ int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev);
#define NGBE_DEFAULT_TX_HTHRESH 0
#define NGBE_DEFAULT_TX_WTHRESH 0
+/* Additional timesync values. */
+#define NGBE_INCVAL_1GB 0x2000000 /* all speeds are the same in Emerald */
+#define NGBE_INCVAL_SHIFT_1GB 22 /* all speeds are the same in Emerald */
+
+#define NGBE_CYCLECOUNTER_MASK 0xffffffffffffffffULL
+
/* store statistics names and its offset in stats structure */
struct rte_ngbe_xstats_name_off {
char name[RTE_ETH_XSTATS_NAME_SIZE];
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 91cafed7fc..e0ca4af9d9 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -15,6 +15,13 @@
#include "base/ngbe.h"
#include "ngbe_ethdev.h"
#include "ngbe_rxtx.h"
+
+#ifdef RTE_LIBRTE_IEEE1588
+#define NGBE_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#else
+#define NGBE_TX_IEEE1588_TMST 0
+#endif
+
/* Bit Mask to indicate what bits required for building Tx context */
static const u64 NGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM |
PKT_TX_OUTER_IPV6 |
@@ -25,7 +32,9 @@ static const u64 NGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM |
PKT_TX_L4_MASK |
PKT_TX_TCP_SEG |
PKT_TX_TUNNEL_MASK |
- PKT_TX_OUTER_IP_CKSUM);
+ PKT_TX_OUTER_IP_CKSUM |
+ NGBE_TX_IEEE1588_TMST);
+
#define NGBE_TX_OFFLOAD_NOTSUP_MASK \
(PKT_TX_OFFLOAD_MASK ^ NGBE_TX_OFFLOAD_MASK)
@@ -730,6 +739,11 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
*/
cmd_type_len = NGBE_TXD_FCS;
+#ifdef RTE_LIBRTE_IEEE1588
+ if (ol_flags & PKT_TX_IEEE1588_TMST)
+ cmd_type_len |= NGBE_TXD_1588;
+#endif
+
olinfo_status = 0;
if (tx_ol_req) {
if (ol_flags & PKT_TX_TCP_SEG) {
@@ -906,7 +920,20 @@ ngbe_rxd_pkt_info_to_pkt_flags(uint32_t pkt_info)
PKT_RX_RSS_HASH, 0, 0, 0,
0, 0, 0, PKT_RX_FDIR,
};
+#ifdef RTE_LIBRTE_IEEE1588
+ static uint64_t ip_pkt_etqf_map[8] = {
+ 0, 0, 0, PKT_RX_IEEE1588_PTP,
+ 0, 0, 0, 0,
+ };
+ int etfid = ngbe_etflt_id(NGBE_RXD_PTID(pkt_info));
+ if (likely(-1 != etfid))
+ return ip_pkt_etqf_map[etfid] |
+ ip_rss_types_map[NGBE_RXD_RSSTYPE(pkt_info)];
+ else
+ return ip_rss_types_map[NGBE_RXD_RSSTYPE(pkt_info)];
+#else
return ip_rss_types_map[NGBE_RXD_RSSTYPE(pkt_info)];
+#endif
}
static inline uint64_t
@@ -923,6 +950,10 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags)
vlan_flags & PKT_RX_VLAN_STRIPPED)
? vlan_flags : 0;
+#ifdef RTE_LIBRTE_IEEE1588
+ if (rx_status & NGBE_RXD_STAT_1588)
+ pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
+#endif
return pkt_flags;
}
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 25/32] net/ngbe: add Rx and Tx queue info get
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (23 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 24/32] net/ngbe: support timesync Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 26/32] net/ngbe: add Rx and Tx descriptor status Jiawen Wu
` (6 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add operations to get Rx and Tx queue information.
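For reference (not part of this patch), applications reach these ops via
rte_eth_rx_queue_info_get() and rte_eth_tx_queue_info_get(); dump_rxq()
below is a hypothetical helper and its names are illustrative:

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper: print a few of the fields this patch fills in. */
static void
dump_rxq(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_rxq_info qinfo;

	if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) == 0)
		printf("rxq %u: %u descriptors, drop_en=%u\n",
		       (unsigned int)queue_id, (unsigned int)qinfo.nb_desc,
		       (unsigned int)qinfo.conf.rx_drop_en);
}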
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/ngbe_ethdev.c | 2 ++
drivers/net/ngbe/ngbe_ethdev.h | 6 ++++++
drivers/net/ngbe/ngbe_rxtx.c | 37 ++++++++++++++++++++++++++++++++++
3 files changed, 45 insertions(+)
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 506b94168c..2d0c9e3453 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -3184,6 +3184,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.rss_hash_update = ngbe_dev_rss_hash_update,
.rss_hash_conf_get = ngbe_dev_rss_hash_conf_get,
.set_mc_addr_list = ngbe_dev_set_mc_addr_list,
+ .rxq_info_get = ngbe_rxq_info_get,
+ .txq_info_get = ngbe_txq_info_get,
.timesync_enable = ngbe_timesync_enable,
.timesync_disable = ngbe_timesync_disable,
.timesync_read_rx_timestamp = ngbe_timesync_read_rx_timestamp,
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index b6e623ab0f..98df1c3bf0 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -200,6 +200,12 @@ int ngbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
int ngbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void ngbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+
+void ngbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index e0ca4af9d9..ac97eec1c0 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -3092,3 +3092,40 @@ ngbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
return 0;
}
+
+void
+ngbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct ngbe_rx_queue *rxq;
+
+ rxq = dev->data->rx_queues[queue_id];
+
+ qinfo->mp = rxq->mb_pool;
+ qinfo->scattered_rx = dev->data->scattered_rx;
+ qinfo->nb_desc = rxq->nb_rx_desc;
+
+ qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+ qinfo->conf.rx_drop_en = rxq->drop_en;
+ qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+ qinfo->conf.offloads = rxq->offloads;
+}
+
+void
+ngbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct ngbe_tx_queue *txq;
+
+ txq = dev->data->tx_queues[queue_id];
+
+ qinfo->nb_desc = txq->nb_tx_desc;
+
+ qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+ qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+ qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+
+ qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+ qinfo->conf.offloads = txq->offloads;
+ qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 26/32] net/ngbe: add Rx and Tx descriptor status
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (24 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 25/32] net/ngbe: add Rx and Tx queue info get Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 27/32] net/ngbe: add Tx done cleanup Jiawen Wu
` (5 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support getting the number of used Rx descriptors,
and checking the status of Rx and Tx descriptors.
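A possible application-side view (sketch only, illustrative names); note
that ngbe_dev_rx_queue_count() scans the ring in steps of 4 descriptors,
so the returned count is a multiple of that interval:

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper: probe ring occupancy and the head descriptor. */
static void
probe_rxq(uint16_t port_id, uint16_t queue_id)
{
	int used = rte_eth_rx_queue_count(port_id, queue_id);
	int st = rte_eth_rx_descriptor_status(port_id, queue_id, 0);

	printf("rxq %u: ~%d used descriptors, head %s\n",
	       (unsigned int)queue_id, used,
	       st == RTE_ETH_RX_DESC_DONE ? "done" : "not done");
}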
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 2 +
drivers/net/ngbe/ngbe_ethdev.c | 3 ++
drivers/net/ngbe/ngbe_ethdev.h | 6 +++
drivers/net/ngbe/ngbe_rxtx.c | 73 +++++++++++++++++++++++++++++++
4 files changed, 84 insertions(+)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index c780f1aa68..56d5d71ea8 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -32,6 +32,8 @@ Inner L3 checksum = P
Inner L4 checksum = P
Packet type parsing = Y
Timesync = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
Basic stats = Y
Extended stats = Y
Stats per queue = Y
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 2d0c9e3453..ec652aa359 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -370,6 +370,9 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
PMD_INIT_FUNC_TRACE();
eth_dev->dev_ops = &ngbe_eth_dev_ops;
+ eth_dev->rx_queue_count = ngbe_dev_rx_queue_count;
+ eth_dev->rx_descriptor_status = ngbe_dev_rx_descriptor_status;
+ eth_dev->tx_descriptor_status = ngbe_dev_tx_descriptor_status;
eth_dev->rx_pkt_burst = &ngbe_recv_pkts;
eth_dev->tx_pkt_burst = &ngbe_xmit_pkts;
eth_dev->tx_pkt_prepare = &ngbe_prep_pkts;
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 98df1c3bf0..aacc0b68b2 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -181,6 +181,12 @@ int ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
uint16_t nb_tx_desc, unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
+uint32_t ngbe_dev_rx_queue_count(struct rte_eth_dev *dev,
+ uint16_t rx_queue_id);
+
+int ngbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
+int ngbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
+
int ngbe_dev_rx_init(struct rte_eth_dev *dev);
void ngbe_dev_tx_init(struct rte_eth_dev *dev);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index ac97eec1c0..0b31474193 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -2263,6 +2263,79 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
return 0;
}
+uint32_t
+ngbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+#define NGBE_RXQ_SCAN_INTERVAL 4
+ volatile struct ngbe_rx_desc *rxdp;
+ struct ngbe_rx_queue *rxq;
+ uint32_t desc = 0;
+
+ rxq = dev->data->rx_queues[rx_queue_id];
+ rxdp = &rxq->rx_ring[rxq->rx_tail];
+
+ while ((desc < rxq->nb_rx_desc) &&
+ (rxdp->qw1.lo.status &
+ rte_cpu_to_le_32(NGBE_RXD_STAT_DD))) {
+ desc += NGBE_RXQ_SCAN_INTERVAL;
+ rxdp += NGBE_RXQ_SCAN_INTERVAL;
+ if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+ rxdp = &(rxq->rx_ring[rxq->rx_tail +
+ desc - rxq->nb_rx_desc]);
+ }
+
+ return desc;
+}
+
+int
+ngbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+ struct ngbe_rx_queue *rxq = rx_queue;
+ volatile uint32_t *status;
+ uint32_t nb_hold, desc;
+
+ if (unlikely(offset >= rxq->nb_rx_desc))
+ return -EINVAL;
+
+ nb_hold = rxq->nb_rx_hold;
+ if (offset >= rxq->nb_rx_desc - nb_hold)
+ return RTE_ETH_RX_DESC_UNAVAIL;
+
+ desc = rxq->rx_tail + offset;
+ if (desc >= rxq->nb_rx_desc)
+ desc -= rxq->nb_rx_desc;
+
+ status = &rxq->rx_ring[desc].qw1.lo.status;
+ if (*status & rte_cpu_to_le_32(NGBE_RXD_STAT_DD))
+ return RTE_ETH_RX_DESC_DONE;
+
+ return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+ngbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
+{
+ struct ngbe_tx_queue *txq = tx_queue;
+ volatile uint32_t *status;
+ uint32_t desc;
+
+ if (unlikely(offset >= txq->nb_tx_desc))
+ return -EINVAL;
+
+ desc = txq->tx_tail + offset;
+ if (desc >= txq->nb_tx_desc) {
+ desc -= txq->nb_tx_desc;
+ if (desc >= txq->nb_tx_desc)
+ desc -= txq->nb_tx_desc;
+ }
+
+ status = &txq->tx_ring[desc].dw3;
+ if (*status & rte_cpu_to_le_32(NGBE_TXD_DD))
+ return RTE_ETH_TX_DESC_DONE;
+
+ return RTE_ETH_TX_DESC_FULL;
+}
+
void
ngbe_dev_clear_queues(struct rte_eth_dev *dev)
{
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 27/32] net/ngbe: add Tx done cleanup
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (25 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 26/32] net/ngbe: add Rx and Tx descriptor status Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 28/32] net/ngbe: add IPsec context creation Jiawen Wu
` (4 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add support for API rte_eth_tx_done_cleanup().
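A minimal usage sketch (not part of this patch; the queue choice is
illustrative). The PMD internally picks the simple or the full cleanup
path depending on the queue's offloads and tx_free_thresh:

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper: reclaim completed Tx mbufs, e.g. when the
 * application's mempool runs low; free_cnt == 0 means "as many as
 * possible".
 */
static void
reclaim_txq(uint16_t port_id, uint16_t queue_id)
{
	int freed = rte_eth_tx_done_cleanup(port_id, queue_id, 0);

	if (freed > 0)
		printf("txq %u: reclaimed %d packets\n",
		       (unsigned int)queue_id, freed);
}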
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/ngbe_ethdev.c | 1 +
drivers/net/ngbe/ngbe_rxtx.c | 89 ++++++++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_rxtx.h | 1 +
3 files changed, 91 insertions(+)
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index ec652aa359..4eaf9b0724 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -3200,6 +3200,7 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.timesync_adjust_time = ngbe_timesync_adjust_time,
.timesync_read_time = ngbe_timesync_read_time,
.timesync_write_time = ngbe_timesync_write_time,
+ .tx_done_cleanup = ngbe_dev_tx_done_cleanup,
};
RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 0b31474193..bee4f04616 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -1717,6 +1717,95 @@ ngbe_tx_queue_release_mbufs(struct ngbe_tx_queue *txq)
}
}
+static int
+ngbe_tx_done_cleanup_full(struct ngbe_tx_queue *txq, uint32_t free_cnt)
+{
+ struct ngbe_tx_entry *swr_ring = txq->sw_ring;
+ uint16_t i, tx_last, tx_id;
+ uint16_t nb_tx_free_last;
+ uint16_t nb_tx_to_clean;
+ uint32_t pkt_cnt;
+
+ /* Start free mbuf from the next of tx_tail */
+ tx_last = txq->tx_tail;
+ tx_id = swr_ring[tx_last].next_id;
+
+ if (txq->nb_tx_free == 0 && ngbe_xmit_cleanup(txq))
+ return 0;
+
+ nb_tx_to_clean = txq->nb_tx_free;
+ nb_tx_free_last = txq->nb_tx_free;
+ if (!free_cnt)
+ free_cnt = txq->nb_tx_desc;
+
+ /* Loop through swr_ring to count the number of
+ * freeable mbufs and packets.
+ */
+ for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
+ for (i = 0; i < nb_tx_to_clean &&
+ pkt_cnt < free_cnt &&
+ tx_id != tx_last; i++) {
+ if (swr_ring[tx_id].mbuf != NULL) {
+ rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
+ swr_ring[tx_id].mbuf = NULL;
+
+ /*
+ * last segment in the packet,
+ * increment packet count
+ */
+ pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
+ }
+
+ tx_id = swr_ring[tx_id].next_id;
+ }
+
+ if (pkt_cnt < free_cnt) {
+ if (ngbe_xmit_cleanup(txq))
+ break;
+
+ nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+ nb_tx_free_last = txq->nb_tx_free;
+ }
+ }
+
+ return (int)pkt_cnt;
+}
+
+static int
+ngbe_tx_done_cleanup_simple(struct ngbe_tx_queue *txq,
+ uint32_t free_cnt)
+{
+ int i, n, cnt;
+
+ if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
+ free_cnt = txq->nb_tx_desc;
+
+ cnt = free_cnt - free_cnt % txq->tx_free_thresh;
+
+ for (i = 0; i < cnt; i += n) {
+ if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_free_thresh)
+ break;
+
+ n = ngbe_tx_free_bufs(txq);
+
+ if (n == 0)
+ break;
+ }
+
+ return i;
+}
+
+int
+ngbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
+{
+ struct ngbe_tx_queue *txq = (struct ngbe_tx_queue *)tx_queue;
+ if (txq->offloads == 0 &&
+ txq->tx_free_thresh >= RTE_PMD_NGBE_TX_MAX_BURST)
+ return ngbe_tx_done_cleanup_simple(txq, free_cnt);
+
+ return ngbe_tx_done_cleanup_full(txq, free_cnt);
+}
+
static void
ngbe_tx_free_swring(struct ngbe_tx_queue *txq)
{
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index 812bc57c9e..d63b25c1aa 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -370,6 +370,7 @@ struct ngbe_txq_ops {
void ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq);
void ngbe_set_rx_function(struct rte_eth_dev *dev);
+int ngbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt);
uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev);
uint64_t ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 28/32] net/ngbe: add IPsec context creation
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (26 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 27/32] net/ngbe: add Tx done cleanup Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-15 16:58 ` Ferruh Yigit
2021-09-16 9:04 ` Hemant Agrawal
2021-09-08 8:37 ` [dpdk-dev] [PATCH 29/32] net/ngbe: create and destroy security session Jiawen Wu
` (3 subsequent siblings)
31 siblings, 2 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Initialize the security context, and support getting security
capabilities.
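A sketch (not part of this patch) of how an application discovers the
advertised capabilities once the context is attached to the port:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_security.h>

/* Hypothetical helper: list the IPsec capabilities of a port. */
static void
show_sec_caps(uint16_t port_id)
{
	struct rte_security_ctx *ctx = rte_eth_dev_get_sec_ctx(port_id);
	const struct rte_security_capability *cap;

	if (ctx == NULL)
		return;	/* device is not crypto capable */

	for (cap = rte_security_capabilities_get(ctx);
	     cap->action != RTE_SECURITY_ACTION_TYPE_NONE; cap++)
		if (cap->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
			printf("IPsec: mode %d, direction %d\n",
			       (int)cap->ipsec.mode,
			       (int)cap->ipsec.direction);
}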
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 1 +
drivers/net/ngbe/meson.build | 3 +-
drivers/net/ngbe/ngbe_ethdev.c | 10 ++
drivers/net/ngbe/ngbe_ethdev.h | 4 +
drivers/net/ngbe/ngbe_ipsec.c | 178 ++++++++++++++++++++++++++++++
5 files changed, 195 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ngbe/ngbe_ipsec.c
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 56d5d71ea8..facdb5f006 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -23,6 +23,7 @@ RSS reta update = Y
SR-IOV = Y
VLAN filter = Y
Flow control = Y
+Inline crypto = Y
CRC offload = P
VLAN offload = P
QinQ offload = P
diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
index b276ec3341..f222595b19 100644
--- a/drivers/net/ngbe/meson.build
+++ b/drivers/net/ngbe/meson.build
@@ -12,12 +12,13 @@ objs = [base_objs]
sources = files(
'ngbe_ethdev.c',
+ 'ngbe_ipsec.c',
'ngbe_ptypes.c',
'ngbe_pf.c',
'ngbe_rxtx.c',
)
-deps += ['hash']
+deps += ['hash', 'security']
includes += include_directories('base')
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 4eaf9b0724..b0e0f7411e 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -430,6 +430,12 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
/* Unlock any pending hardware semaphore */
ngbe_swfw_lock_reset(hw);
+#ifdef RTE_LIB_SECURITY
+ /* Initialize security_ctx only for primary process*/
+ if (ngbe_ipsec_ctx_create(eth_dev))
+ return -ENOMEM;
+#endif
+
/* Get Hardware Flow Control setting */
hw->fc.requested_mode = ngbe_fc_full;
hw->fc.current_mode = ngbe_fc_full;
@@ -1282,6 +1288,10 @@ ngbe_dev_close(struct rte_eth_dev *dev)
rte_free(dev->data->hash_mac_addrs);
dev->data->hash_mac_addrs = NULL;
+#ifdef RTE_LIB_SECURITY
+ rte_free(dev->security_ctx);
+#endif
+
return ret;
}
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index aacc0b68b2..9eda024d65 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -264,6 +264,10 @@ void ngbe_pf_mbx_process(struct rte_eth_dev *eth_dev);
int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev);
+#ifdef RTE_LIB_SECURITY
+int ngbe_ipsec_ctx_create(struct rte_eth_dev *dev);
+#endif
+
/* High threshold controlling when to start sending XOFF frames. */
#define NGBE_FC_XOFF_HITH 128 /*KB*/
/* Low threshold controlling when to start sending XON frames. */
diff --git a/drivers/net/ngbe/ngbe_ipsec.c b/drivers/net/ngbe/ngbe_ipsec.c
new file mode 100644
index 0000000000..5f8b0bab29
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_ipsec.c
@@ -0,0 +1,178 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include <ethdev_pci.h>
+#include <rte_security_driver.h>
+#include <rte_cryptodev.h>
+
+#include "base/ngbe.h"
+#include "ngbe_ethdev.h"
+
+static const struct rte_security_capability *
+ngbe_crypto_capabilities_get(void *device __rte_unused)
+{
+ static const struct rte_cryptodev_capabilities
+ aes_gcm_gmac_crypto_capabilities[] = {
+ { /* AES GMAC (128-bit) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_GMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* AES GCM (128-bit) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ .op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED
+ }, }
+ },
+ };
+
+ static const struct rte_security_capability
+ ngbe_security_capabilities[] = {
+ { /* IPsec Inline Crypto ESP Transport Egress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ {.ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .options = { 0 }
+ } },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+ },
+ { /* IPsec Inline Crypto ESP Transport Ingress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ {.ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ .options = { 0 }
+ } },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = 0
+ },
+ { /* IPsec Inline Crypto ESP Tunnel Egress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ {.ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .options = { 0 }
+ } },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+ },
+ { /* IPsec Inline Crypto ESP Tunnel Ingress */
+ .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ {.ipsec = {
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ .options = { 0 }
+ } },
+ .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+ .ol_flags = 0
+ },
+ {
+ .action = RTE_SECURITY_ACTION_TYPE_NONE
+ }
+ };
+
+ return ngbe_security_capabilities;
+}
+
+static struct rte_security_ops ngbe_security_ops = {
+ .capabilities_get = ngbe_crypto_capabilities_get
+};
+
+static int
+ngbe_crypto_capable(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t reg_i, reg, capable = 1;
+ /* Test if Rx crypto can be enabled and then write back the initial value */
+ reg_i = rd32(hw, NGBE_SECRXCTL);
+ wr32m(hw, NGBE_SECRXCTL, NGBE_SECRXCTL_ODSA, 0);
+ reg = rd32m(hw, NGBE_SECRXCTL, NGBE_SECRXCTL_ODSA);
+ if (reg != 0)
+ capable = 0;
+ wr32(hw, NGBE_SECRXCTL, reg_i);
+ return capable;
+}
+
+int
+ngbe_ipsec_ctx_create(struct rte_eth_dev *dev)
+{
+ struct rte_security_ctx *ctx = NULL;
+
+ if (ngbe_crypto_capable(dev)) {
+ ctx = rte_malloc("rte_security_instances_ops",
+ sizeof(struct rte_security_ctx), 0);
+ if (ctx) {
+ ctx->device = (void *)dev;
+ ctx->ops = &ngbe_security_ops;
+ ctx->sess_cnt = 0;
+ dev->security_ctx = ctx;
+ } else {
+ return -ENOMEM;
+ }
+ }
+ if (rte_security_dynfield_register() < 0)
+ return -rte_errno;
+ return 0;
+}
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 29/32] net/ngbe: create and destroy security session
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (27 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 28/32] net/ngbe: add IPsec context creation Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 30/32] net/ngbe: support security operations Jiawen Wu
` (2 subsequent siblings)
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support configuring a security session, and add create and destroy
operations for security sessions.
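For context, a sketch of the session_conf shape this PMD accepts:
AES-GCM-128 inline crypto ESP. The key buffer layout (16-byte key
immediately followed by the 4-byte salt) mirrors what
ngbe_crypto_create_session() reads; all values below are placeholders:

#include <string.h>
#include <rte_security.h>
#include <rte_crypto_sym.h>

/* Placeholder 16-byte AES key immediately followed by a 4-byte salt. */
static const uint8_t key_and_salt[20];

/* Hypothetical helper: fill a conf suitable for this driver. */
static void
fill_ipsec_conf(struct rte_security_session_conf *conf,
		struct rte_crypto_sym_xform *xform, uint32_t spi)
{
	memset(xform, 0, sizeof(*xform));
	xform->type = RTE_CRYPTO_SYM_XFORM_AEAD;
	xform->aead.algo = RTE_CRYPTO_AEAD_AES_GCM;
	xform->aead.key.data = key_and_salt;
	xform->aead.key.length = 16;	/* salt is read from key[16..19] */

	memset(conf, 0, sizeof(*conf));
	conf->action_type = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO;
	conf->protocol = RTE_SECURITY_PROTOCOL_IPSEC;
	conf->ipsec.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
	conf->ipsec.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT;
	conf->ipsec.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
	conf->ipsec.spi = spi;
	conf->crypto_xform = xform;
}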
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/ngbe_ethdev.h | 8 +
drivers/net/ngbe/ngbe_ipsec.c | 377 +++++++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_ipsec.h | 78 +++++++
3 files changed, 463 insertions(+)
create mode 100644 drivers/net/ngbe/ngbe_ipsec.h
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 9eda024d65..e8ce01e1f4 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -7,6 +7,9 @@
#define _NGBE_ETHDEV_H_
#include "ngbe_ptypes.h"
+#ifdef RTE_LIB_SECURITY
+#include "ngbe_ipsec.h"
+#endif
#include <rte_time.h>
#include <rte_ethdev.h>
#include <rte_ethdev_core.h>
@@ -107,6 +110,9 @@ struct ngbe_adapter {
struct ngbe_hwstrip hwstrip;
struct ngbe_vf_info *vfdata;
struct ngbe_uta_info uta_info;
+#ifdef RTE_LIB_SECURITY
+ struct ngbe_ipsec ipsec;
+#endif
bool rx_bulk_alloc_allowed;
struct rte_timecounter systime_tc;
struct rte_timecounter rx_tstamp_tc;
@@ -160,6 +166,8 @@ ngbe_dev_intr(struct rte_eth_dev *dev)
#define NGBE_DEV_UTA_INFO(dev) \
(&((struct ngbe_adapter *)(dev)->data->dev_private)->uta_info)
+#define NGBE_DEV_IPSEC(dev) \
+ (&((struct ngbe_adapter *)(dev)->data->dev_private)->ipsec)
/*
* Rx/Tx function prototypes
diff --git a/drivers/net/ngbe/ngbe_ipsec.c b/drivers/net/ngbe/ngbe_ipsec.c
index 5f8b0bab29..80151d45dc 100644
--- a/drivers/net/ngbe/ngbe_ipsec.c
+++ b/drivers/net/ngbe/ngbe_ipsec.c
@@ -9,6 +9,381 @@
#include "base/ngbe.h"
#include "ngbe_ethdev.h"
+#include "ngbe_ipsec.h"
+
+#define CMP_IP(a, b) (\
+ (a).ipv6[0] == (b).ipv6[0] && \
+ (a).ipv6[1] == (b).ipv6[1] && \
+ (a).ipv6[2] == (b).ipv6[2] && \
+ (a).ipv6[3] == (b).ipv6[3])
+
+static int
+ngbe_crypto_add_sa(struct ngbe_crypto_session *ic_session)
+{
+ struct rte_eth_dev *dev = ic_session->dev;
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_ipsec *priv = NGBE_DEV_IPSEC(dev);
+ uint32_t reg_val;
+ int sa_index = -1;
+
+ if (ic_session->op == NGBE_OP_AUTHENTICATED_DECRYPTION) {
+ int i, ip_index = -1;
+ uint8_t *key;
+
+ /* Find a match in the IP table*/
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ if (CMP_IP(priv->rx_ip_tbl[i].ip,
+ ic_session->dst_ip)) {
+ ip_index = i;
+ break;
+ }
+ }
+ /* If no match, find a free entry in the IP table*/
+ if (ip_index < 0) {
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ if (priv->rx_ip_tbl[i].ref_count == 0) {
+ ip_index = i;
+ break;
+ }
+ }
+ }
+
+ /* Fail if no match and no free entries*/
+ if (ip_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "No free entry left in the Rx IP table\n");
+ return -1;
+ }
+
+ /* Find a free entry in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->rx_sa_tbl[i].used == 0) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no free entries*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "No free entry left in the Rx SA table\n");
+ return -1;
+ }
+
+ priv->rx_ip_tbl[ip_index].ip.ipv6[0] =
+ ic_session->dst_ip.ipv6[0];
+ priv->rx_ip_tbl[ip_index].ip.ipv6[1] =
+ ic_session->dst_ip.ipv6[1];
+ priv->rx_ip_tbl[ip_index].ip.ipv6[2] =
+ ic_session->dst_ip.ipv6[2];
+ priv->rx_ip_tbl[ip_index].ip.ipv6[3] =
+ ic_session->dst_ip.ipv6[3];
+ priv->rx_ip_tbl[ip_index].ref_count++;
+
+ priv->rx_sa_tbl[sa_index].spi = ic_session->spi;
+ priv->rx_sa_tbl[sa_index].ip_index = ip_index;
+ priv->rx_sa_tbl[sa_index].mode = IPSRXMOD_VALID;
+ if (ic_session->op == NGBE_OP_AUTHENTICATED_DECRYPTION)
+ priv->rx_sa_tbl[sa_index].mode |=
+ (IPSRXMOD_PROTO | IPSRXMOD_DECRYPT);
+ if (ic_session->dst_ip.type == IPv6) {
+ priv->rx_sa_tbl[sa_index].mode |= IPSRXMOD_IPV6;
+ priv->rx_ip_tbl[ip_index].ip.type = IPv6;
+ } else if (ic_session->dst_ip.type == IPv4) {
+ priv->rx_ip_tbl[ip_index].ip.type = IPv4;
+ }
+ priv->rx_sa_tbl[sa_index].used = 1;
+
+ /* write IP table entry*/
+ reg_val = NGBE_IPSRXIDX_ENA | NGBE_IPSRXIDX_WRITE |
+ NGBE_IPSRXIDX_TB_IP | (ip_index << 3);
+ if (priv->rx_ip_tbl[ip_index].ip.type == IPv4) {
+ uint32_t ipv4 = priv->rx_ip_tbl[ip_index].ip.ipv4;
+ wr32(hw, NGBE_IPSRXADDR(0), rte_cpu_to_be_32(ipv4));
+ wr32(hw, NGBE_IPSRXADDR(1), 0);
+ wr32(hw, NGBE_IPSRXADDR(2), 0);
+ wr32(hw, NGBE_IPSRXADDR(3), 0);
+ } else {
+ wr32(hw, NGBE_IPSRXADDR(0),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[0]);
+ wr32(hw, NGBE_IPSRXADDR(1),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[1]);
+ wr32(hw, NGBE_IPSRXADDR(2),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[2]);
+ wr32(hw, NGBE_IPSRXADDR(3),
+ priv->rx_ip_tbl[ip_index].ip.ipv6[3]);
+ }
+ wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000);
+
+ /* write SPI table entry*/
+ reg_val = NGBE_IPSRXIDX_ENA | NGBE_IPSRXIDX_WRITE |
+ NGBE_IPSRXIDX_TB_SPI | (sa_index << 3);
+ wr32(hw, NGBE_IPSRXSPI,
+ priv->rx_sa_tbl[sa_index].spi);
+ wr32(hw, NGBE_IPSRXADDRIDX,
+ priv->rx_sa_tbl[sa_index].ip_index);
+ wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000);
+
+ /* write Key table entry*/
+ key = malloc(ic_session->key_len);
+ if (!key)
+ return -ENOMEM;
+
+ memcpy(key, ic_session->key, ic_session->key_len);
+
+ reg_val = NGBE_IPSRXIDX_ENA | NGBE_IPSRXIDX_WRITE |
+ NGBE_IPSRXIDX_TB_KEY | (sa_index << 3);
+ wr32(hw, NGBE_IPSRXKEY(0),
+ rte_cpu_to_be_32(*(uint32_t *)&key[12]));
+ wr32(hw, NGBE_IPSRXKEY(1),
+ rte_cpu_to_be_32(*(uint32_t *)&key[8]));
+ wr32(hw, NGBE_IPSRXKEY(2),
+ rte_cpu_to_be_32(*(uint32_t *)&key[4]));
+ wr32(hw, NGBE_IPSRXKEY(3),
+ rte_cpu_to_be_32(*(uint32_t *)&key[0]));
+ wr32(hw, NGBE_IPSRXSALT,
+ rte_cpu_to_be_32(ic_session->salt));
+ wr32(hw, NGBE_IPSRXMODE,
+ priv->rx_sa_tbl[sa_index].mode);
+ wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000);
+
+ free(key);
+ } else { /* sess->dir == RTE_CRYPTO_OUTBOUND */
+ uint8_t *key;
+ int i;
+
+ /* Find a free entry in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->tx_sa_tbl[i].used == 0) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no free entries*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "No free entry left in the Tx SA table\n");
+ return -1;
+ }
+
+ priv->tx_sa_tbl[sa_index].spi =
+ rte_cpu_to_be_32(ic_session->spi);
+ priv->tx_sa_tbl[i].used = 1;
+ ic_session->sa_index = sa_index;
+
+ key = malloc(ic_session->key_len);
+ if (!key)
+ return -ENOMEM;
+
+ memcpy(key, ic_session->key, ic_session->key_len);
+
+ /* write Key table entry*/
+ reg_val = NGBE_IPSRXIDX_ENA |
+ NGBE_IPSRXIDX_WRITE | (sa_index << 3);
+ wr32(hw, NGBE_IPSTXKEY(0),
+ rte_cpu_to_be_32(*(uint32_t *)&key[12]));
+ wr32(hw, NGBE_IPSTXKEY(1),
+ rte_cpu_to_be_32(*(uint32_t *)&key[8]));
+ wr32(hw, NGBE_IPSTXKEY(2),
+ rte_cpu_to_be_32(*(uint32_t *)&key[4]));
+ wr32(hw, NGBE_IPSTXKEY(3),
+ rte_cpu_to_be_32(*(uint32_t *)&key[0]));
+ wr32(hw, NGBE_IPSTXSALT,
+ rte_cpu_to_be_32(ic_session->salt));
+ wr32w(hw, NGBE_IPSTXIDX, reg_val, NGBE_IPSTXIDX_WRITE, 1000);
+
+ free(key);
+ }
+
+ return 0;
+}
+
+static int
+ngbe_crypto_remove_sa(struct rte_eth_dev *dev,
+ struct ngbe_crypto_session *ic_session)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_ipsec *priv = NGBE_DEV_IPSEC(dev);
+ uint32_t reg_val;
+ int sa_index = -1;
+
+ if (ic_session->op == NGBE_OP_AUTHENTICATED_DECRYPTION) {
+ int i, ip_index = -1;
+
+ /* Find a match in the IP table*/
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ if (CMP_IP(priv->rx_ip_tbl[i].ip, ic_session->dst_ip)) {
+ ip_index = i;
+ break;
+ }
+ }
+
+ /* Fail if no match*/
+ if (ip_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "Entry not found in the Rx IP table\n");
+ return -1;
+ }
+
+ /* Find a free entry in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->rx_sa_tbl[i].spi ==
+ rte_cpu_to_be_32(ic_session->spi)) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no match*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "Entry not found in the Rx SA table\n");
+ return -1;
+ }
+
+ /* Disable and clear Rx SPI and key table entries */
+ reg_val = NGBE_IPSRXIDX_WRITE |
+ NGBE_IPSRXIDX_TB_SPI | (sa_index << 3);
+ wr32(hw, NGBE_IPSRXSPI, 0);
+ wr32(hw, NGBE_IPSRXADDRIDX, 0);
+ wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000);
+ reg_val = NGBE_IPSRXIDX_WRITE |
+ NGBE_IPSRXIDX_TB_KEY | (sa_index << 3);
+ wr32(hw, NGBE_IPSRXKEY(0), 0);
+ wr32(hw, NGBE_IPSRXKEY(1), 0);
+ wr32(hw, NGBE_IPSRXKEY(2), 0);
+ wr32(hw, NGBE_IPSRXKEY(3), 0);
+ wr32(hw, NGBE_IPSRXSALT, 0);
+ wr32(hw, NGBE_IPSRXMODE, 0);
+ wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000);
+ priv->rx_sa_tbl[sa_index].used = 0;
+
+ /* If last used then clear the IP table entry*/
+ priv->rx_ip_tbl[ip_index].ref_count--;
+ if (priv->rx_ip_tbl[ip_index].ref_count == 0) {
+ reg_val = NGBE_IPSRXIDX_WRITE | NGBE_IPSRXIDX_TB_IP |
+ (ip_index << 3);
+ wr32(hw, NGBE_IPSRXADDR(0), 0);
+ wr32(hw, NGBE_IPSRXADDR(1), 0);
+ wr32(hw, NGBE_IPSRXADDR(2), 0);
+ wr32(hw, NGBE_IPSRXADDR(3), 0);
+ }
+ } else { /* session->dir == RTE_CRYPTO_OUTBOUND */
+ int i;
+
+ /* Find a match in the SA table*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ if (priv->tx_sa_tbl[i].spi ==
+ rte_cpu_to_be_32(ic_session->spi)) {
+ sa_index = i;
+ break;
+ }
+ }
+ /* Fail if no match entries*/
+ if (sa_index < 0) {
+ PMD_DRV_LOG(ERR,
+ "Entry not found in the Tx SA table\n");
+ return -1;
+ }
+ reg_val = NGBE_IPSRXIDX_WRITE | (sa_index << 3);
+ wr32(hw, NGBE_IPSTXKEY(0), 0);
+ wr32(hw, NGBE_IPSTXKEY(1), 0);
+ wr32(hw, NGBE_IPSTXKEY(2), 0);
+ wr32(hw, NGBE_IPSTXKEY(3), 0);
+ wr32(hw, NGBE_IPSTXSALT, 0);
+ wr32w(hw, NGBE_IPSTXIDX, reg_val, NGBE_IPSTXIDX_WRITE, 1000);
+
+ priv->tx_sa_tbl[sa_index].used = 0;
+ }
+
+ return 0;
+}
+
+static int
+ngbe_crypto_create_session(void *device,
+ struct rte_security_session_conf *conf,
+ struct rte_security_session *session,
+ struct rte_mempool *mempool)
+{
+ struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+ struct ngbe_crypto_session *ic_session = NULL;
+ struct rte_crypto_aead_xform *aead_xform;
+ struct rte_eth_conf *dev_conf = ð_dev->data->dev_conf;
+
+ if (rte_mempool_get(mempool, (void **)&ic_session)) {
+ PMD_DRV_LOG(ERR, "Cannot get object from ic_session mempool");
+ return -ENOMEM;
+ }
+
+ if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
+ conf->crypto_xform->aead.algo !=
+ RTE_CRYPTO_AEAD_AES_GCM) {
+ PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
+ rte_mempool_put(mempool, (void *)ic_session);
+ return -ENOTSUP;
+ }
+ aead_xform = &conf->crypto_xform->aead;
+
+ if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+ if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+ ic_session->op = NGBE_OP_AUTHENTICATED_DECRYPTION;
+ } else {
+ PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
+ rte_mempool_put(mempool, (void *)ic_session);
+ return -ENOTSUP;
+ }
+ } else {
+ if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+ ic_session->op = NGBE_OP_AUTHENTICATED_ENCRYPTION;
+ } else {
+ PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
+ rte_mempool_put(mempool, (void *)ic_session);
+ return -ENOTSUP;
+ }
+ }
+
+ ic_session->key = aead_xform->key.data;
+ ic_session->key_len = aead_xform->key.length;
+ memcpy(&ic_session->salt,
+ &aead_xform->key.data[aead_xform->key.length], 4);
+ ic_session->spi = conf->ipsec.spi;
+ ic_session->dev = eth_dev;
+
+ set_sec_session_private_data(session, ic_session);
+
+ if (ic_session->op == NGBE_OP_AUTHENTICATED_ENCRYPTION) {
+ if (ngbe_crypto_add_sa(ic_session)) {
+ PMD_DRV_LOG(ERR, "Failed to add SA\n");
+ rte_mempool_put(mempool, (void *)ic_session);
+ return -EPERM;
+ }
+ }
+
+ return 0;
+}
+
+static int
+ngbe_crypto_remove_session(void *device,
+ struct rte_security_session *session)
+{
+ struct rte_eth_dev *eth_dev = device;
+ struct ngbe_crypto_session *ic_session =
+ (struct ngbe_crypto_session *)
+ get_sec_session_private_data(session);
+ struct rte_mempool *mempool = rte_mempool_from_obj(ic_session);
+
+ if (eth_dev != ic_session->dev) {
+ PMD_DRV_LOG(ERR, "Session not bound to this device\n");
+ return -ENODEV;
+ }
+
+ if (ngbe_crypto_remove_sa(eth_dev, ic_session)) {
+ PMD_DRV_LOG(ERR, "Failed to remove session\n");
+ return -EFAULT;
+ }
+
+ rte_mempool_put(mempool, (void *)ic_session);
+
+ return 0;
+}
static const struct rte_security_capability *
ngbe_crypto_capabilities_get(void *device __rte_unused)
@@ -137,6 +512,8 @@ ngbe_crypto_capabilities_get(void *device __rte_unused)
}
static struct rte_security_ops ngbe_security_ops = {
+ .session_create = ngbe_crypto_create_session,
+ .session_destroy = ngbe_crypto_remove_session,
.capabilities_get = ngbe_crypto_capabilities_get
};
diff --git a/drivers/net/ngbe/ngbe_ipsec.h b/drivers/net/ngbe/ngbe_ipsec.h
new file mode 100644
index 0000000000..8442bb2157
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_ipsec.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef NGBE_IPSEC_H_
+#define NGBE_IPSEC_H_
+
+#include <rte_ethdev.h>
+#include <rte_ethdev_core.h>
+#include <rte_security.h>
+
+#define IPSRXMOD_VALID 0x00000001
+#define IPSRXMOD_PROTO 0x00000004
+#define IPSRXMOD_DECRYPT 0x00000008
+#define IPSRXMOD_IPV6 0x00000010
+
+#define IPSEC_MAX_RX_IP_COUNT 16
+#define IPSEC_MAX_SA_COUNT 16
+
+enum ngbe_operation {
+ NGBE_OP_AUTHENTICATED_ENCRYPTION,
+ NGBE_OP_AUTHENTICATED_DECRYPTION
+};
+
+/**
+ * Generic IP address structure
+ * TODO: Find a better location for this, possibly rte_net.h.
+ **/
+struct ipaddr {
+ enum ipaddr_type {
+ IPv4,
+ IPv6
+ } type;
+ /**< IP Address Type - IPv4/IPv6 */
+
+ union {
+ uint32_t ipv4;
+ uint32_t ipv6[4];
+ };
+};
+
+/** inline crypto private session structure */
+struct ngbe_crypto_session {
+ enum ngbe_operation op;
+ const uint8_t *key;
+ uint32_t key_len;
+ uint32_t salt;
+ uint32_t sa_index;
+ uint32_t spi;
+ struct ipaddr src_ip;
+ struct ipaddr dst_ip;
+ struct rte_eth_dev *dev;
+} __rte_cache_aligned;
+
+struct ngbe_crypto_rx_ip_table {
+ struct ipaddr ip;
+ uint16_t ref_count;
+};
+struct ngbe_crypto_rx_sa_table {
+ uint32_t spi;
+ uint32_t ip_index;
+ uint8_t mode;
+ uint8_t used;
+};
+
+struct ngbe_crypto_tx_sa_table {
+ uint32_t spi;
+ uint8_t used;
+};
+
+struct ngbe_ipsec {
+ struct ngbe_crypto_rx_ip_table rx_ip_tbl[IPSEC_MAX_RX_IP_COUNT];
+ struct ngbe_crypto_rx_sa_table rx_sa_tbl[IPSEC_MAX_SA_COUNT];
+ struct ngbe_crypto_tx_sa_table tx_sa_tbl[IPSEC_MAX_SA_COUNT];
+};
+
+#endif /*NGBE_IPSEC_H_*/
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 30/32] net/ngbe: support security operations
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (28 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 29/32] net/ngbe: create and destroy security session Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 31/32] net/ngbe: add security offload in Rx and Tx Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 32/32] doc: update for ngbe Jiawen Wu
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support updating a security session and clearing security session
statistics.
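A datapath sketch (not part of this patch; names illustrative): on Tx,
the application tags the mbuf and lets set_pkt_metadata() stash the SA
index and pad length into the mbuf's security dynamic field:

#include <rte_mbuf.h>
#include <rte_security.h>

/* Hypothetical helper: prepare one mbuf for inline IPsec Tx. */
static int
tx_tag_ipsec(struct rte_security_ctx *ctx,
	     struct rte_security_session *sess, struct rte_mbuf *m)
{
	m->ol_flags |= PKT_TX_SEC_OFFLOAD;
	return rte_security_set_pkt_metadata(ctx, sess, m, NULL);
}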
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/ngbe_ipsec.c | 41 +++++++++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_ipsec.h | 15 +++++++++++++
2 files changed, 56 insertions(+)
diff --git a/drivers/net/ngbe/ngbe_ipsec.c b/drivers/net/ngbe/ngbe_ipsec.c
index 80151d45dc..cc79d7d88f 100644
--- a/drivers/net/ngbe/ngbe_ipsec.c
+++ b/drivers/net/ngbe/ngbe_ipsec.c
@@ -360,6 +360,12 @@ ngbe_crypto_create_session(void *device,
return 0;
}
+static unsigned int
+ngbe_crypto_session_get_size(__rte_unused void *device)
+{
+ return sizeof(struct ngbe_crypto_session);
+}
+
static int
ngbe_crypto_remove_session(void *device,
struct rte_security_session *session)
@@ -385,6 +391,39 @@ ngbe_crypto_remove_session(void *device,
return 0;
}
+static inline uint8_t
+ngbe_crypto_compute_pad_len(struct rte_mbuf *m)
+{
+ if (m->nb_segs == 1) {
+ /* 16 bytes ICV + 2 bytes ESP trailer + payload padding size
+ * payload padding size is stored at <pkt_len - 18>
+ */
+ uint8_t *esp_pad_len = rte_pktmbuf_mtod_offset(m, uint8_t *,
+ rte_pktmbuf_pkt_len(m) -
+ (ESP_TRAILER_SIZE + ESP_ICV_SIZE));
+ return *esp_pad_len + ESP_TRAILER_SIZE + ESP_ICV_SIZE;
+ }
+ return 0;
+}
+
+static int
+ngbe_crypto_update_mb(void *device __rte_unused,
+ struct rte_security_session *session,
+ struct rte_mbuf *m, void *params __rte_unused)
+{
+ struct ngbe_crypto_session *ic_session =
+ get_sec_session_private_data(session);
+ if (ic_session->op == NGBE_OP_AUTHENTICATED_ENCRYPTION) {
+ union ngbe_crypto_tx_desc_md *mdata =
+ (union ngbe_crypto_tx_desc_md *)
+ rte_security_dynfield(m);
+ mdata->enc = 1;
+ mdata->sa_idx = ic_session->sa_index;
+ mdata->pad_len = ngbe_crypto_compute_pad_len(m);
+ }
+ return 0;
+}
+
static const struct rte_security_capability *
ngbe_crypto_capabilities_get(void *device __rte_unused)
{
@@ -513,7 +552,9 @@ ngbe_crypto_capabilities_get(void *device __rte_unused)
static struct rte_security_ops ngbe_security_ops = {
.session_create = ngbe_crypto_create_session,
+ .session_get_size = ngbe_crypto_session_get_size,
.session_destroy = ngbe_crypto_remove_session,
+ .set_pkt_metadata = ngbe_crypto_update_mb,
.capabilities_get = ngbe_crypto_capabilities_get
};
diff --git a/drivers/net/ngbe/ngbe_ipsec.h b/drivers/net/ngbe/ngbe_ipsec.h
index 8442bb2157..fa5f21027b 100644
--- a/drivers/net/ngbe/ngbe_ipsec.h
+++ b/drivers/net/ngbe/ngbe_ipsec.h
@@ -18,6 +18,9 @@
#define IPSEC_MAX_RX_IP_COUNT 16
#define IPSEC_MAX_SA_COUNT 16
+#define ESP_ICV_SIZE 16
+#define ESP_TRAILER_SIZE 2
+
enum ngbe_operation {
NGBE_OP_AUTHENTICATED_ENCRYPTION,
NGBE_OP_AUTHENTICATED_DECRYPTION
@@ -69,6 +72,18 @@ struct ngbe_crypto_tx_sa_table {
uint8_t used;
};
+union ngbe_crypto_tx_desc_md {
+ uint64_t data;
+ struct {
+ /**< SA table index */
+ uint32_t sa_idx;
+ /**< ICV and ESP trailer length */
+ uint8_t pad_len;
+ /**< enable encryption */
+ uint8_t enc;
+ };
+};
+
struct ngbe_ipsec {
struct ngbe_crypto_rx_ip_table rx_ip_tbl[IPSEC_MAX_RX_IP_COUNT];
struct ngbe_crypto_rx_sa_table rx_sa_tbl[IPSEC_MAX_SA_COUNT];
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 31/32] net/ngbe: add security offload in Rx and Tx
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (29 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 30/32] net/ngbe: support security operations Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 32/32] doc: update for ngbe Jiawen Wu
31 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add security offload to the Rx and Tx datapaths.
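As a configuration sketch (not part of this patch; the queue counts are
placeholders), the offload is requested at configure time and the result
shows up in the Rx mbuf flags:

#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical helper: request inline IPsec in both directions. */
static int
configure_inline_ipsec(uint16_t port_id)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.offloads |= DEV_RX_OFFLOAD_SECURITY;
	conf.txmode.offloads |= DEV_TX_OFFLOAD_SECURITY;

	/* On Rx, a processed packet carries PKT_RX_SEC_OFFLOAD, plus
	 * PKT_RX_SEC_OFFLOAD_FAILED if the hardware reported an error.
	 */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}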
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/ngbe_ipsec.c | 106 ++++++++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_ipsec.h | 2 +
drivers/net/ngbe/ngbe_rxtx.c | 91 ++++++++++++++++++++++++++++-
drivers/net/ngbe/ngbe_rxtx.h | 14 ++++-
4 files changed, 210 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ngbe/ngbe_ipsec.c b/drivers/net/ngbe/ngbe_ipsec.c
index cc79d7d88f..54e05a834f 100644
--- a/drivers/net/ngbe/ngbe_ipsec.c
+++ b/drivers/net/ngbe/ngbe_ipsec.c
@@ -17,6 +17,55 @@
(a).ipv6[2] == (b).ipv6[2] && \
(a).ipv6[3] == (b).ipv6[3])
+static void
+ngbe_crypto_clear_ipsec_tables(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_ipsec *priv = NGBE_DEV_IPSEC(dev);
+ int i = 0;
+
+ /* clear Rx IP table*/
+ for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) {
+ uint16_t index = i << 3;
+ uint32_t reg_val = NGBE_IPSRXIDX_WRITE |
+ NGBE_IPSRXIDX_TB_IP | index;
+ wr32(hw, NGBE_IPSRXADDR(0), 0);
+ wr32(hw, NGBE_IPSRXADDR(1), 0);
+ wr32(hw, NGBE_IPSRXADDR(2), 0);
+ wr32(hw, NGBE_IPSRXADDR(3), 0);
+ wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000);
+ }
+
+ /* clear Rx SPI and Rx/Tx SA tables*/
+ for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+ uint32_t index = i << 3;
+ uint32_t reg_val = NGBE_IPSRXIDX_WRITE |
+ NGBE_IPSRXIDX_TB_SPI | index;
+ wr32(hw, NGBE_IPSRXSPI, 0);
+ wr32(hw, NGBE_IPSRXADDRIDX, 0);
+ wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000);
+ reg_val = NGBE_IPSRXIDX_WRITE | NGBE_IPSRXIDX_TB_KEY | index;
+ wr32(hw, NGBE_IPSRXKEY(0), 0);
+ wr32(hw, NGBE_IPSRXKEY(1), 0);
+ wr32(hw, NGBE_IPSRXKEY(2), 0);
+ wr32(hw, NGBE_IPSRXKEY(3), 0);
+ wr32(hw, NGBE_IPSRXSALT, 0);
+ wr32(hw, NGBE_IPSRXMODE, 0);
+ wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000);
+ reg_val = NGBE_IPSTXIDX_WRITE | index;
+ wr32(hw, NGBE_IPSTXKEY(0), 0);
+ wr32(hw, NGBE_IPSTXKEY(1), 0);
+ wr32(hw, NGBE_IPSTXKEY(2), 0);
+ wr32(hw, NGBE_IPSTXKEY(3), 0);
+ wr32(hw, NGBE_IPSTXSALT, 0);
+ wr32w(hw, NGBE_IPSTXIDX, reg_val, NGBE_IPSTXIDX_WRITE, 1000);
+ }
+
+ memset(priv->rx_ip_tbl, 0, sizeof(priv->rx_ip_tbl));
+ memset(priv->rx_sa_tbl, 0, sizeof(priv->rx_sa_tbl));
+ memset(priv->tx_sa_tbl, 0, sizeof(priv->tx_sa_tbl));
+}
+
static int
ngbe_crypto_add_sa(struct ngbe_crypto_session *ic_session)
{
@@ -550,6 +599,63 @@ ngbe_crypto_capabilities_get(void *device __rte_unused)
return ngbe_security_capabilities;
}
+int
+ngbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t reg;
+ uint64_t rx_offloads;
+ uint64_t tx_offloads;
+
+ rx_offloads = dev->data->dev_conf.rxmode.offloads;
+ tx_offloads = dev->data->dev_conf.txmode.offloads;
+
+ /* sanity checks */
+ if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
+ return -1;
+ }
+ if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+ PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
+ return -1;
+ }
+
+ /* Set NGBE_SECTXBUFAF to 0x14 as required in the datasheet */
+ wr32(hw, NGBE_SECTXBUFAF, 0x14);
+
+ /* IFG needs to be set to 3 when we are using security. Otherwise a Tx
+ * hang will occur with heavy traffic.
+ */
+ reg = rd32(hw, NGBE_SECTXIFG);
+ reg = (reg & ~NGBE_SECTXIFG_MIN_MASK) | NGBE_SECTXIFG_MIN(0x3);
+ wr32(hw, NGBE_SECTXIFG, reg);
+
+ reg = rd32(hw, NGBE_SECRXCTL);
+ reg |= NGBE_SECRXCTL_CRCSTRIP;
+ wr32(hw, NGBE_SECRXCTL, reg);
+
+ if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+ wr32m(hw, NGBE_SECRXCTL, NGBE_SECRXCTL_ODSA, 0);
+ reg = rd32m(hw, NGBE_SECRXCTL, NGBE_SECRXCTL_ODSA);
+ if (reg != 0) {
+ PMD_DRV_LOG(ERR, "Error enabling Rx Crypto");
+ return -1;
+ }
+ }
+ if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+ wr32(hw, NGBE_SECTXCTL, NGBE_SECTXCTL_STFWD);
+ reg = rd32(hw, NGBE_SECTXCTL);
+ if (reg != NGBE_SECTXCTL_STFWD) {
+ PMD_DRV_LOG(ERR, "Error enabling Tx Crypto");
+ return -1;
+ }
+ }
+
+ ngbe_crypto_clear_ipsec_tables(dev);
+
+ return 0;
+}
+
static struct rte_security_ops ngbe_security_ops = {
.session_create = ngbe_crypto_create_session,
.session_get_size = ngbe_crypto_session_get_size,
diff --git a/drivers/net/ngbe/ngbe_ipsec.h b/drivers/net/ngbe/ngbe_ipsec.h
index fa5f21027b..13273d91d8 100644
--- a/drivers/net/ngbe/ngbe_ipsec.h
+++ b/drivers/net/ngbe/ngbe_ipsec.h
@@ -90,4 +90,6 @@ struct ngbe_ipsec {
struct ngbe_crypto_tx_sa_table tx_sa_tbl[IPSEC_MAX_SA_COUNT];
};
+int ngbe_crypto_enable_ipsec(struct rte_eth_dev *dev);
+
#endif /*NGBE_IPSEC_H_*/
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index bee4f04616..04c8ec4e88 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -33,6 +33,9 @@ static const u64 NGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM |
PKT_TX_TCP_SEG |
PKT_TX_TUNNEL_MASK |
PKT_TX_OUTER_IP_CKSUM |
+#ifdef RTE_LIB_SECURITY
+ PKT_TX_SEC_OFFLOAD |
+#endif
NGBE_TX_IEEE1588_TMST);
#define NGBE_TX_OFFLOAD_NOTSUP_MASK \
@@ -274,7 +277,8 @@ ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
static inline void
ngbe_set_xmit_ctx(struct ngbe_tx_queue *txq,
volatile struct ngbe_tx_ctx_desc *ctx_txd,
- uint64_t ol_flags, union ngbe_tx_offload tx_offload)
+ uint64_t ol_flags, union ngbe_tx_offload tx_offload,
+ __rte_unused uint64_t *mdata)
{
union ngbe_tx_offload tx_offload_mask;
uint32_t type_tucmd_mlhl;
@@ -361,6 +365,19 @@ ngbe_set_xmit_ctx(struct ngbe_tx_queue *txq,
vlan_macip_lens |= NGBE_TXD_VLAN(tx_offload.vlan_tci);
}
+#ifdef RTE_LIB_SECURITY
+ if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+ union ngbe_crypto_tx_desc_md *md =
+ (union ngbe_crypto_tx_desc_md *)mdata;
+ tunnel_seed |= NGBE_TXD_IPSEC_SAIDX(md->sa_idx);
+ type_tucmd_mlhl |= md->enc ?
+ (NGBE_TXD_IPSEC_ESP | NGBE_TXD_IPSEC_ESPENC) : 0;
+ type_tucmd_mlhl |= NGBE_TXD_IPSEC_ESPLEN(md->pad_len);
+ tx_offload_mask.sa_idx |= ~0;
+ tx_offload_mask.sec_pad_len |= ~0;
+ }
+#endif
+
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
tx_offload_mask.data[0] & tx_offload.data[0];
@@ -592,6 +609,9 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t ctx = 0;
uint32_t new_ctx;
union ngbe_tx_offload tx_offload;
+#ifdef RTE_LIB_SECURITY
+ uint8_t use_ipsec;
+#endif
tx_offload.data[0] = 0;
tx_offload.data[1] = 0;
@@ -618,6 +638,9 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* are needed for offload functionality.
*/
ol_flags = tx_pkt->ol_flags;
+#ifdef RTE_LIB_SECURITY
+ use_ipsec = txq->using_ipsec && (ol_flags & PKT_TX_SEC_OFFLOAD);
+#endif
/* If hardware offload required */
tx_ol_req = ol_flags & NGBE_TX_OFFLOAD_MASK;
@@ -633,6 +656,16 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
tx_offload.outer_tun_len = 0;
+#ifdef RTE_LIB_SECURITY
+ if (use_ipsec) {
+ union ngbe_crypto_tx_desc_md *ipsec_mdata =
+ (union ngbe_crypto_tx_desc_md *)
+ rte_security_dynfield(tx_pkt);
+ tx_offload.sa_idx = ipsec_mdata->sa_idx;
+ tx_offload.sec_pad_len = ipsec_mdata->pad_len;
+ }
+#endif
+
/* If new context need be built or reuse the exist ctx*/
ctx = what_ctx_update(txq, tx_ol_req, tx_offload);
/* Only allocate context descriptor if required */
@@ -776,7 +809,8 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
}
ngbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
- tx_offload);
+ tx_offload,
+ rte_security_dynfield(tx_pkt));
txe->last_id = tx_last;
tx_id = txe->next_id;
@@ -795,6 +829,10 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
}
olinfo_status |= NGBE_TXD_PAYLEN(pkt_len);
+#ifdef RTE_LIB_SECURITY
+ if (use_ipsec)
+ olinfo_status |= NGBE_TXD_IPSEC;
+#endif
m_seg = tx_pkt;
do {
@@ -978,6 +1016,13 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
pkt_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
}
+#ifdef RTE_LIB_SECURITY
+ if (rx_status & NGBE_RXD_STAT_SECP) {
+ pkt_flags |= PKT_RX_SEC_OFFLOAD;
+ if (rx_status & NGBE_RXD_ERR_SECERR)
+ pkt_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+ }
+#endif
return pkt_flags;
}
@@ -1800,6 +1845,9 @@ ngbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
struct ngbe_tx_queue *txq = (struct ngbe_tx_queue *)tx_queue;
if (txq->offloads == 0 &&
+#ifdef RTE_LIB_SECURITY
+ !(txq->using_ipsec) &&
+#endif
txq->tx_free_thresh >= RTE_PMD_NGBE_TX_MAX_BURST)
return ngbe_tx_done_cleanup_simple(txq, free_cnt);
@@ -1885,6 +1933,9 @@ ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
if (txq->offloads == 0 &&
+#ifdef RTE_LIB_SECURITY
+ !(txq->using_ipsec) &&
+#endif
txq->tx_free_thresh >= RTE_PMD_NGBE_TX_MAX_BURST) {
PMD_INIT_LOG(DEBUG, "Using simple tx code path");
dev->tx_pkt_burst = ngbe_xmit_pkts_simple;
@@ -1926,6 +1977,10 @@ ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
if (hw->is_pf)
tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
+#ifdef RTE_LIB_SECURITY
+ if (dev->security_ctx)
+ tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+#endif
return tx_offload_capa;
}
@@ -2012,6 +2067,10 @@ ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->offloads = offloads;
txq->ops = &def_txq_ops;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
+#ifdef RTE_LIB_SECURITY
+ txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
+ DEV_TX_OFFLOAD_SECURITY);
+#endif
txq->tdt_reg_addr = NGBE_REG_ADDR(hw, NGBE_TXWP(txq->reg_idx));
txq->tdc_reg_addr = NGBE_REG_ADDR(hw, NGBE_TXCFG(txq->reg_idx));
@@ -2220,6 +2279,11 @@ ngbe_get_rx_port_offloads(struct rte_eth_dev *dev)
offloads |= (DEV_RX_OFFLOAD_QINQ_STRIP |
DEV_RX_OFFLOAD_VLAN_EXTEND);
+#ifdef RTE_LIB_SECURITY
+ if (dev->security_ctx)
+ offloads |= DEV_RX_OFFLOAD_SECURITY;
+#endif
+
return offloads;
}
@@ -2745,6 +2809,7 @@ ngbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
void
ngbe_set_rx_function(struct rte_eth_dev *dev)
{
+ uint16_t i;
struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
if (dev->data->scattered_rx) {
@@ -2788,6 +2853,15 @@ ngbe_set_rx_function(struct rte_eth_dev *dev)
dev->rx_pkt_burst = ngbe_recv_pkts;
}
+
+#ifdef RTE_LIB_SECURITY
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ struct ngbe_rx_queue *rxq = dev->data->rx_queues[i];
+
+ rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_SECURITY);
+ }
+#endif
}
/*
@@ -3052,6 +3126,19 @@ ngbe_dev_rxtx_start(struct rte_eth_dev *dev)
if (hw->is_pf && dev->data->dev_conf.lpbk_mode)
ngbe_setup_loopback_link(hw);
+#ifdef RTE_LIB_SECURITY
+ if ((dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) ||
+ (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_SECURITY)) {
+ ret = ngbe_crypto_enable_ipsec(dev);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR,
+ "ngbe_crypto_enable_ipsec fails with %d.",
+ ret);
+ return ret;
+ }
+ }
+#endif
+
return 0;
}
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index d63b25c1aa..67c1260f6f 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -261,7 +261,10 @@ struct ngbe_rx_queue {
uint16_t rx_nb_avail; /**< nr of staged pkts ready to ret to app */
uint16_t rx_next_avail; /**< idx of next staged pkt to ret to app */
uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
-
+#ifdef RTE_LIB_SECURITY
+ uint8_t using_ipsec;
+ /**< indicates that IPsec Rx feature is in use */
+#endif
uint16_t rx_free_thresh; /**< max free Rx desc to hold */
uint16_t queue_id; /**< RX queue index */
uint16_t reg_idx; /**< RX queue register index */
@@ -305,6 +308,11 @@ union ngbe_tx_offload {
uint64_t outer_tun_len:8; /**< Outer TUN (Tunnel) Hdr Length. */
uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
uint64_t outer_l3_len:16; /**< Outer L3 (IP) Hdr Length. */
+#ifdef RTE_LIB_SECURITY
+ /* inline IPsec related */
+ uint64_t sa_idx:8; /**< TX SA database entry index */
+ uint64_t sec_pad_len:4; /**< padding length */
+#endif
};
};
@@ -355,6 +363,10 @@ struct ngbe_tx_queue {
uint8_t tx_deferred_start; /**< not in global dev start */
const struct ngbe_txq_ops *ops; /**< txq ops */
+#ifdef RTE_LIB_SECURITY
+ uint8_t using_ipsec;
+ /**< indicates that IPsec TX feature is in use */
+#endif
};
struct ngbe_txq_ops {
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* [dpdk-dev] [PATCH 32/32] doc: update for ngbe
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
` (30 preceding siblings ...)
2021-09-08 8:37 ` [dpdk-dev] [PATCH 31/32] net/ngbe: add security offload in Rx and Tx Jiawen Wu
@ 2021-09-08 8:37 ` Jiawen Wu
2021-09-15 16:58 ` Ferruh Yigit
31 siblings, 1 reply; 54+ messages in thread
From: Jiawen Wu @ 2021-09-08 8:37 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add new ngbe PMD features to the 21.11 release notes.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/rel_notes/release_21_11.rst | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 675b573834..81093cf6c0 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -62,6 +62,16 @@ New Features
* Added bus-level parsing of the devargs syntax.
* Kept compatibility with the legacy syntax as parsing fallback.
+* **Updated Wangxun ngbe driver.**
+ Updated the Wangxun ngbe driver. Added more features to complete the driver,
+ including:
+
+ * Added offloads and packet type on RxTx.
+ * Added device basic statistics and extended stats.
+ * Added VLAN and MAC filters.
+ * Added multi-queue and RSS.
+ * Added SRIOV.
+ * Added IPsec.
Removed Items
-------------
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 02/32] net/ngbe: support scattered Rx
2021-09-08 8:37 ` [dpdk-dev] [PATCH 02/32] net/ngbe: support scattered Rx Jiawen Wu
@ 2021-09-15 13:22 ` Ferruh Yigit
0 siblings, 0 replies; 54+ messages in thread
From: Ferruh Yigit @ 2021-09-15 13:22 UTC (permalink / raw)
To: Jiawen Wu, dev
On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> Add scattered Rx function to support receiving segmented mbufs.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> doc/guides/nics/features/ngbe.ini | 1 +
> doc/guides/nics/ngbe.rst | 1 +
> drivers/net/ngbe/ngbe_ethdev.c | 20 +-
> drivers/net/ngbe/ngbe_ethdev.h | 8 +
> drivers/net/ngbe/ngbe_rxtx.c | 541 ++++++++++++++++++++++++++++++
> drivers/net/ngbe/ngbe_rxtx.h | 5 +
> 6 files changed, 574 insertions(+), 2 deletions(-)
>
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> index 8b7588184a..f85754eb7a 100644
> --- a/doc/guides/nics/features/ngbe.ini
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -8,6 +8,7 @@ Speed capabilities = Y
> Link status = Y
> Link status event = Y
> Queue start/stop = Y
> +Scattered Rx = Y
> Packet type parsing = Y
> Multiprocess aware = Y
> Linux = Y
> diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
> index d044397cd5..463452ce8c 100644
> --- a/doc/guides/nics/ngbe.rst
> +++ b/doc/guides/nics/ngbe.rst
> @@ -13,6 +13,7 @@ Features
>
> - Packet type information
> - Link state information
> +- Scattered for RX
>
>
> Prerequisites
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 4388d93560..fba0a2dcfd 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -140,8 +140,16 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
> eth_dev->rx_pkt_burst = &ngbe_recv_pkts;
> eth_dev->tx_pkt_burst = &ngbe_xmit_pkts_simple;
>
> - if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> + /*
> + * For secondary processes, we don't initialise any further as primary
> + * has already done this work. Only check we don't need a different
> + * Rx and Tx function.
> + */
> + if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> + ngbe_set_rx_function(eth_dev);
> +
> return 0;
> + }
>
> rte_eth_copy_pci_info(eth_dev, pci_dev);
>
> @@ -528,6 +536,9 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
>
> ngbe_dev_clear_queues(dev);
>
> + /* Clear stored conf */
> + dev->data->scattered_rx = 0;
> +
> /* Clear recorded link status */
> memset(&link, 0, sizeof(link));
> rte_eth_linkstatus_set(dev, &link);
> @@ -628,6 +639,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues;
> dev_info->min_rx_bufsize = 1024;
> dev_info->max_rx_pktlen = 15872;
> + dev_info->rx_offload_capa = (ngbe_get_rx_port_offloads(dev) |
> + dev_info->rx_queue_offload_capa);
>
> dev_info->default_rxconf = (struct rte_eth_rxconf) {
> .rx_thresh = {
> @@ -670,7 +683,10 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> const uint32_t *
> ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
> {
> - if (dev->rx_pkt_burst == ngbe_recv_pkts)
> + if (dev->rx_pkt_burst == ngbe_recv_pkts ||
> + dev->rx_pkt_burst == ngbe_recv_pkts_sc_single_alloc ||
> + dev->rx_pkt_burst == ngbe_recv_pkts_sc_bulk_alloc ||
> + dev->rx_pkt_burst == ngbe_recv_pkts_bulk_alloc)
> return ngbe_get_supported_ptypes();
>
> return NULL;
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index 486c6c3839..e7fe9a03b7 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -106,6 +106,14 @@ int ngbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
> uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> uint16_t nb_pkts);
>
> +uint16_t ngbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
> + uint16_t nb_pkts);
> +
> +uint16_t ngbe_recv_pkts_sc_single_alloc(void *rx_queue,
> + struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
> +uint16_t ngbe_recv_pkts_sc_bulk_alloc(void *rx_queue,
> + struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
> +
> uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
> uint16_t nb_pkts);
>
> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
> index a3ef0f7577..49fa978853 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.c
> +++ b/drivers/net/ngbe/ngbe_rxtx.c
> @@ -263,6 +263,243 @@ ngbe_rxd_pkt_info_to_pkt_type(uint32_t pkt_info, uint16_t ptid_mask)
> return ngbe_decode_ptype(ptid);
> }
>
> +/*
> + * LOOK_AHEAD defines how many desc statuses to check beyond the
> + * current descriptor.
> + * It must be a pound define for optimal performance.
> + * Do not change the value of LOOK_AHEAD, as the ngbe_rx_scan_hw_ring
> + * function only works with LOOK_AHEAD=8.
> + */
> +#define LOOK_AHEAD 8
> +#if (LOOK_AHEAD != 8)
> +#error "PMD NGBE: LOOK_AHEAD must be 8\n"
> +#endif
> +static inline int
> +ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq)
> +{
> + volatile struct ngbe_rx_desc *rxdp;
> + struct ngbe_rx_entry *rxep;
> + struct rte_mbuf *mb;
> + uint16_t pkt_len;
> + int nb_dd;
> + uint32_t s[LOOK_AHEAD];
> + uint32_t pkt_info[LOOK_AHEAD];
> + int i, j, nb_rx = 0;
> + uint32_t status;
> +
> + /* get references to current descriptor and S/W ring entry */
> + rxdp = &rxq->rx_ring[rxq->rx_tail];
> + rxep = &rxq->sw_ring[rxq->rx_tail];
> +
> + status = rxdp->qw1.lo.status;
> + /* check to make sure there is at least 1 packet to receive */
> + if (!(status & rte_cpu_to_le_32(NGBE_RXD_STAT_DD)))
> + return 0;
> +
> + /*
> + * Scan LOOK_AHEAD descriptors at a time to determine which descriptors
> + * reference packets that are ready to be received.
> + */
> + for (i = 0; i < RTE_PMD_NGBE_RX_MAX_BURST;
> + i += LOOK_AHEAD, rxdp += LOOK_AHEAD, rxep += LOOK_AHEAD) {
> + /* Read desc statuses backwards to avoid race condition */
> + for (j = 0; j < LOOK_AHEAD; j++)
> + s[j] = rte_le_to_cpu_32(rxdp[j].qw1.lo.status);
> +
> + rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
> +
> + /* Compute how many status bits were set */
> + for (nb_dd = 0; nb_dd < LOOK_AHEAD &&
> + (s[nb_dd] & NGBE_RXD_STAT_DD); nb_dd++)
> + ;
> +
> + for (j = 0; j < nb_dd; j++)
> + pkt_info[j] = rte_le_to_cpu_32(rxdp[j].qw0.dw0);
> +
> + nb_rx += nb_dd;
> +
> + /* Translate descriptor info to mbuf format */
> + for (j = 0; j < nb_dd; ++j) {
> + mb = rxep[j].mbuf;
> + pkt_len = rte_le_to_cpu_16(rxdp[j].qw1.hi.len);
> + mb->data_len = pkt_len;
> + mb->pkt_len = pkt_len;
> +
> + mb->packet_type =
> + ngbe_rxd_pkt_info_to_pkt_type(pkt_info[j],
> + rxq->pkt_type_mask);
> + }
> +
> + /* Move mbuf pointers from the S/W ring to the stage */
> + for (j = 0; j < LOOK_AHEAD; ++j)
> + rxq->rx_stage[i + j] = rxep[j].mbuf;
> +
> + /* stop if all requested packets could not be received */
> + if (nb_dd != LOOK_AHEAD)
> + break;
> + }
> +
> + /* clear software ring entries so we can cleanup correctly */
> + for (i = 0; i < nb_rx; ++i)
> + rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL;
> +
> + return nb_rx;
> +}
> +
> +static inline int
> +ngbe_rx_alloc_bufs(struct ngbe_rx_queue *rxq, bool reset_mbuf)
> +{
> + volatile struct ngbe_rx_desc *rxdp;
> + struct ngbe_rx_entry *rxep;
> + struct rte_mbuf *mb;
> + uint16_t alloc_idx;
> + __le64 dma_addr;
> + int diag, i;
> +
> + /* allocate buffers in bulk directly into the S/W ring */
> + alloc_idx = rxq->rx_free_trigger - (rxq->rx_free_thresh - 1);
> + rxep = &rxq->sw_ring[alloc_idx];
> + diag = rte_mempool_get_bulk(rxq->mb_pool, (void *)rxep,
> + rxq->rx_free_thresh);
> + if (unlikely(diag != 0))
> + return -ENOMEM;
> +
> + rxdp = &rxq->rx_ring[alloc_idx];
> + for (i = 0; i < rxq->rx_free_thresh; ++i) {
> + /* populate the static rte mbuf fields */
> + mb = rxep[i].mbuf;
> + if (reset_mbuf)
> + mb->port = rxq->port_id;
> +
> + rte_mbuf_refcnt_set(mb, 1);
> + mb->data_off = RTE_PKTMBUF_HEADROOM;
> +
> + /* populate the descriptors */
> + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
> + NGBE_RXD_HDRADDR(&rxdp[i], 0);
> + NGBE_RXD_PKTADDR(&rxdp[i], dma_addr);
> + }
> +
> + /* update state of internal queue structure */
> + rxq->rx_free_trigger = rxq->rx_free_trigger + rxq->rx_free_thresh;
> + if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
> + rxq->rx_free_trigger = rxq->rx_free_thresh - 1;
> +
> + /* no errors */
> + return 0;
> +}
> +
> +static inline uint16_t
> +ngbe_rx_fill_from_stage(struct ngbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
> + uint16_t nb_pkts)
> +{
> + struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
> + int i;
> +
> + /* how many packets are ready to return? */
> + nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
> +
> + /* copy mbuf pointers to the application's packet list */
> + for (i = 0; i < nb_pkts; ++i)
> + rx_pkts[i] = stage[i];
> +
> + /* update internal queue state */
> + rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
> + rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
> +
> + return nb_pkts;
> +}
> +
> +static inline uint16_t
> +ngbe_rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> + uint16_t nb_pkts)
> +{
> + struct ngbe_rx_queue *rxq = (struct ngbe_rx_queue *)rx_queue;
> + struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
> + uint16_t nb_rx = 0;
> +
> + /* Any previously recv'd pkts will be returned from the Rx stage */
> + if (rxq->rx_nb_avail)
> + return ngbe_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
> +
> + /* Scan the H/W ring for packets to receive */
> + nb_rx = (uint16_t)ngbe_rx_scan_hw_ring(rxq);
> +
> + /* update internal queue state */
> + rxq->rx_next_avail = 0;
> + rxq->rx_nb_avail = nb_rx;
> + rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
> +
> + /* if required, allocate new buffers to replenish descriptors */
> + if (rxq->rx_tail > rxq->rx_free_trigger) {
> + uint16_t cur_free_trigger = rxq->rx_free_trigger;
> +
> + if (ngbe_rx_alloc_bufs(rxq, true) != 0) {
> + int i, j;
> +
> + PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
> + "queue_id=%u", (uint16_t)rxq->port_id,
> + (uint16_t)rxq->queue_id);
> +
> + dev->data->rx_mbuf_alloc_failed +=
> + rxq->rx_free_thresh;
> +
> + /*
> + * Need to rewind any previous receives if we cannot
> + * allocate new buffers to replenish the old ones.
> + */
> + rxq->rx_nb_avail = 0;
> + rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
> + for (i = 0, j = rxq->rx_tail; i < nb_rx; ++i, ++j)
> + rxq->sw_ring[j].mbuf = rxq->rx_stage[i];
> +
> + return 0;
> + }
> +
> + /* update tail pointer */
> + rte_wmb();
> + ngbe_set32_relaxed(rxq->rdt_reg_addr, cur_free_trigger);
> + }
> +
> + if (rxq->rx_tail >= rxq->nb_rx_desc)
> + rxq->rx_tail = 0;
> +
> + /* received any packets this loop? */
> + if (rxq->rx_nb_avail)
> + return ngbe_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
> +
> + return 0;
> +}
> +
> +/* split requests into chunks of size RTE_PMD_NGBE_RX_MAX_BURST */
> +uint16_t
> +ngbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
> + uint16_t nb_pkts)
> +{
> + uint16_t nb_rx;
> +
> + if (unlikely(nb_pkts == 0))
> + return 0;
> +
> + if (likely(nb_pkts <= RTE_PMD_NGBE_RX_MAX_BURST))
> + return ngbe_rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
> +
> + /* request is relatively large, chunk it up */
> + nb_rx = 0;
> + while (nb_pkts) {
> + uint16_t ret, n;
> +
> + n = (uint16_t)RTE_MIN(nb_pkts, RTE_PMD_NGBE_RX_MAX_BURST);
> + ret = ngbe_rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
> + nb_rx = (uint16_t)(nb_rx + ret);
> + nb_pkts = (uint16_t)(nb_pkts - ret);
> + if (ret < n)
> + break;
> + }
> +
> + return nb_rx;
> +}
> +
> uint16_t
> ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> uint16_t nb_pkts)
> @@ -426,6 +663,246 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> return nb_rx;
> }
>
> +static inline void
> +ngbe_fill_cluster_head_buf(struct rte_mbuf *head, struct ngbe_rx_desc *desc,
> + struct ngbe_rx_queue *rxq, uint32_t staterr)
> +{
> + uint32_t pkt_info;
> +
> + RTE_SET_USED(staterr);
> + head->port = rxq->port_id;
> +
> + pkt_info = rte_le_to_cpu_32(desc->qw0.dw0);
> + head->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info,
> + rxq->pkt_type_mask);
> +}
> +
> +/**
> + * ngbe_recv_pkts_sc - receive handler for scatter case.
> + *
> + * @rx_queue Rx queue handle
> + * @rx_pkts table of received packets
> + * @nb_pkts size of rx_pkts table
> + * @bulk_alloc if TRUE bulk allocation is used for a HW ring refilling
> + *
> + * Returns the number of received packets/clusters (according to the "bulk
> + * receive" interface).
> + */
> +static inline uint16_t
> +ngbe_recv_pkts_sc(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
> + bool bulk_alloc)
> +{
> + struct ngbe_rx_queue *rxq = rx_queue;
> + struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
> + volatile struct ngbe_rx_desc *rx_ring = rxq->rx_ring;
> + struct ngbe_rx_entry *sw_ring = rxq->sw_ring;
> + struct ngbe_scattered_rx_entry *sw_sc_ring = rxq->sw_sc_ring;
> + uint16_t rx_id = rxq->rx_tail;
> + uint16_t nb_rx = 0;
> + uint16_t nb_hold = rxq->nb_rx_hold;
> + uint16_t prev_id = rxq->rx_tail;
> +
> + while (nb_rx < nb_pkts) {
> + bool eop;
> + struct ngbe_rx_entry *rxe;
> + struct ngbe_scattered_rx_entry *sc_entry;
> + struct ngbe_scattered_rx_entry *next_sc_entry = NULL;
> + struct ngbe_rx_entry *next_rxe = NULL;
> + struct rte_mbuf *first_seg;
> + struct rte_mbuf *rxm;
> + struct rte_mbuf *nmb = NULL;
> + struct ngbe_rx_desc rxd;
> + uint16_t data_len;
> + uint16_t next_id;
> + volatile struct ngbe_rx_desc *rxdp;
> + uint32_t staterr;
> +
> +next_desc:
> + rxdp = &rx_ring[rx_id];
> + staterr = rte_le_to_cpu_32(rxdp->qw1.lo.status);
> +
> + if (!(staterr & NGBE_RXD_STAT_DD))
> + break;
> +
> + rxd = *rxdp;
> +
> + PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
> + "staterr=0x%x data_len=%u",
> + rxq->port_id, rxq->queue_id, rx_id, staterr,
> + rte_le_to_cpu_16(rxd.qw1.hi.len));
> +
> + if (!bulk_alloc) {
> + nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
> + if (nmb == NULL) {
> + PMD_RX_LOG(DEBUG, "Rx mbuf alloc failed "
> + "port_id=%u queue_id=%u",
> + rxq->port_id, rxq->queue_id);
> +
> + dev->data->rx_mbuf_alloc_failed++;
> + break;
> + }
> + } else if (nb_hold > rxq->rx_free_thresh) {
> + uint16_t next_rdt = rxq->rx_free_trigger;
> +
> + if (!ngbe_rx_alloc_bufs(rxq, false)) {
> + rte_wmb();
> + ngbe_set32_relaxed(rxq->rdt_reg_addr,
> + next_rdt);
> + nb_hold -= rxq->rx_free_thresh;
> + } else {
> + PMD_RX_LOG(DEBUG, "Rx bulk alloc failed "
> + "port_id=%u queue_id=%u",
> + rxq->port_id, rxq->queue_id);
> +
> + dev->data->rx_mbuf_alloc_failed++;
> + break;
> + }
> + }
> +
> + nb_hold++;
> + rxe = &sw_ring[rx_id];
> + eop = staterr & NGBE_RXD_STAT_EOP;
> +
> + next_id = rx_id + 1;
> + if (next_id == rxq->nb_rx_desc)
> + next_id = 0;
> +
> + /* Prefetch next mbuf while processing current one. */
> + rte_ngbe_prefetch(sw_ring[next_id].mbuf);
> +
> + /*
> + * When next Rx descriptor is on a cache-line boundary,
> + * prefetch the next 4 RX descriptors and the next 4 pointers
> + * to mbufs.
> + */
> + if ((next_id & 0x3) == 0) {
> + rte_ngbe_prefetch(&rx_ring[next_id]);
> + rte_ngbe_prefetch(&sw_ring[next_id]);
> + }
> +
> + rxm = rxe->mbuf;
> +
> + if (!bulk_alloc) {
> + __le64 dma =
> + rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
> + /*
> + * Update Rx descriptor with the physical address of the
> + * new data buffer of the new allocated mbuf.
> + */
> + rxe->mbuf = nmb;
> +
> + rxm->data_off = RTE_PKTMBUF_HEADROOM;
> + NGBE_RXD_HDRADDR(rxdp, 0);
> + NGBE_RXD_PKTADDR(rxdp, dma);
> + } else {
> + rxe->mbuf = NULL;
> + }
> +
> + /*
> + * Set data length & data buffer address of mbuf.
> + */
> + data_len = rte_le_to_cpu_16(rxd.qw1.hi.len);
> + rxm->data_len = data_len;
> +
> + if (!eop) {
> + uint16_t nextp_id;
> +
> + nextp_id = next_id;
> + next_sc_entry = &sw_sc_ring[nextp_id];
> + next_rxe = &sw_ring[nextp_id];
> + rte_ngbe_prefetch(next_rxe);
> + }
> +
> + sc_entry = &sw_sc_ring[rx_id];
> + first_seg = sc_entry->fbuf;
> + sc_entry->fbuf = NULL;
> +
> + /*
> + * If this is the first buffer of the received packet,
> + * set the pointer to the first mbuf of the packet and
> + * initialize its context.
> + * Otherwise, update the total length and the number of segments
> + * of the current scattered packet, and update the pointer to
> + * the last mbuf of the current packet.
> + */
> + if (first_seg == NULL) {
> + first_seg = rxm;
> + first_seg->pkt_len = data_len;
> + first_seg->nb_segs = 1;
> + } else {
> + first_seg->pkt_len += data_len;
> + first_seg->nb_segs++;
> + }
> +
> + prev_id = rx_id;
> + rx_id = next_id;
> +
> + /*
> + * If this is not the last buffer of the received packet, update
> + * the pointer to the first mbuf at the NEXTP entry in the
> + * sw_sc_ring and continue to parse the Rx ring.
> + */
> + if (!eop && next_rxe) {
> + rxm->next = next_rxe->mbuf;
> + next_sc_entry->fbuf = first_seg;
> + goto next_desc;
> + }
> +
> + /* Initialize the first mbuf of the returned packet */
> + ngbe_fill_cluster_head_buf(first_seg, &rxd, rxq, staterr);
> +
> + /* Prefetch data of first segment, if configured to do so. */
> + rte_packet_prefetch((char *)first_seg->buf_addr +
> + first_seg->data_off);
> +
> + /*
> + * Store the mbuf address into the next entry of the array
> + * of returned packets.
> + */
> + rx_pkts[nb_rx++] = first_seg;
> + }
> +
> + /*
> + * Record index of the next Rx descriptor to probe.
> + */
> + rxq->rx_tail = rx_id;
> +
> + /*
> + * If the number of free Rx descriptors is greater than the Rx free
> + * threshold of the queue, advance the Receive Descriptor Tail (RDT)
> + * register.
> + * Update the RDT with the value of the last processed Rx descriptor
> + * minus 1, to guarantee that the RDT register is never equal to the
> + * RDH register, which creates a "full" ring situation from the
> + * hardware point of view...
> + */
> + if (!bulk_alloc && nb_hold > rxq->rx_free_thresh) {
> + PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
> + "nb_hold=%u nb_rx=%u",
> + rxq->port_id, rxq->queue_id, rx_id, nb_hold, nb_rx);
> +
> + rte_wmb();
> + ngbe_set32_relaxed(rxq->rdt_reg_addr, prev_id);
> + nb_hold = 0;
> + }
> +
> + rxq->nb_rx_hold = nb_hold;
> + return nb_rx;
> +}
> +
> +uint16_t
> +ngbe_recv_pkts_sc_single_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
> + uint16_t nb_pkts)
> +{
> + return ngbe_recv_pkts_sc(rx_queue, rx_pkts, nb_pkts, false);
> +}
> +
> +uint16_t
> +ngbe_recv_pkts_sc_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
> + uint16_t nb_pkts)
> +{
> + return ngbe_recv_pkts_sc(rx_queue, rx_pkts, nb_pkts, true);
> +}
>
> /*********************************************************************
> *
> @@ -777,6 +1254,12 @@ ngbe_reset_rx_queue(struct ngbe_adapter *adapter, struct ngbe_rx_queue *rxq)
> rxq->pkt_last_seg = NULL;
> }
>
> +uint64_t
> +ngbe_get_rx_port_offloads(struct rte_eth_dev *dev __rte_unused)
> +{
> + return DEV_RX_OFFLOAD_SCATTER;
> +}
> +
> int
> ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t queue_idx,
> @@ -790,10 +1273,13 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
> struct ngbe_hw *hw;
> uint16_t len;
> struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
> + uint64_t offloads;
>
> PMD_INIT_FUNC_TRACE();
> hw = ngbe_dev_hw(dev);
>
> + offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
> +
> /* Free memory prior to re-allocation if needed... */
> if (dev->data->rx_queues[queue_idx] != NULL) {
> ngbe_rx_queue_release(dev->data->rx_queues[queue_idx]);
> @@ -814,6 +1300,7 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
> rxq->port_id = dev->data->port_id;
> rxq->drop_en = rx_conf->rx_drop_en;
> rxq->rx_deferred_start = rx_conf->rx_deferred_start;
> + rxq->offloads = offloads;
> rxq->pkt_type_mask = NGBE_PTID_MASK;
>
> /*
> @@ -978,6 +1465,54 @@ ngbe_alloc_rx_queue_mbufs(struct ngbe_rx_queue *rxq)
> return 0;
> }
>
> +void
> +ngbe_set_rx_function(struct rte_eth_dev *dev)
> +{
> + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
> +
> + if (dev->data->scattered_rx) {
> + /*
> + * Set the scattered callback: there are bulk and
> + * single allocation versions.
> + */
> + if (adapter->rx_bulk_alloc_allowed) {
> + PMD_INIT_LOG(DEBUG, "Using a Scattered with bulk "
> + "allocation callback (port=%d).",
> + dev->data->port_id);
> + dev->rx_pkt_burst = ngbe_recv_pkts_sc_bulk_alloc;
> + } else {
> + PMD_INIT_LOG(DEBUG, "Using Regular (non-vector, "
> + "single allocation) "
> + "Scattered Rx callback "
> + "(port=%d).",
> + dev->data->port_id);
> +
> + dev->rx_pkt_burst = ngbe_recv_pkts_sc_single_alloc;
> + }
> + /*
> + * Below we set "simple" callbacks according to port/queues parameters.
> + * If parameters allow we are going to choose between the following
> + * callbacks:
> + * - Bulk Allocation
> + * - Single buffer allocation (the simplest one)
> + */
> + } else if (adapter->rx_bulk_alloc_allowed) {
> + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
> + "satisfied. Rx Burst Bulk Alloc function "
> + "will be used on port=%d.",
> + dev->data->port_id);
> +
> + dev->rx_pkt_burst = ngbe_recv_pkts_bulk_alloc;
> + } else {
> + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are not "
> + "satisfied, or Scattered Rx is requested "
> + "(port=%d).",
> + dev->data->port_id);
> +
> + dev->rx_pkt_burst = ngbe_recv_pkts;
> + }
> +}
> +
> /*
> * Initializes Receive Unit.
> */
> @@ -992,6 +1527,7 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
> uint32_t srrctl;
> uint16_t buf_size;
> uint16_t i;
> + struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
>
> PMD_INIT_FUNC_TRACE();
> hw = ngbe_dev_hw(dev);
> @@ -1048,6 +1584,11 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
> wr32(hw, NGBE_RXCFG(rxq->reg_idx), srrctl);
> }
>
> + if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
> + dev->data->scattered_rx = 1;
> +
> + ngbe_set_rx_function(dev);
> +
> return 0;
> }
>
> diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
> index 788d684def..07b5ac3fbe 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.h
> +++ b/drivers/net/ngbe/ngbe_rxtx.h
> @@ -243,6 +243,7 @@ struct ngbe_rx_queue {
> uint16_t port_id; /**< Device port identifier */
> uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En */
> uint8_t rx_deferred_start; /**< not in global dev start */
> + uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
Why is this 'offloads' field needed? It holds the queue offload value, but as
far as I can see it is not used.
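For what it's worth, the usual reason other PMDs keep a per-queue 'offloads'
field is so that it can be reported back to the application later, e.g. from
the Rx queue info dev_op. A minimal sketch, assuming a queue-info op is added
in a later patch (the function name here is illustrative):

	void
	ngbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
			  struct rte_eth_rxq_info *qinfo)
	{
		struct ngbe_rx_queue *rxq = dev->data->rx_queues[queue_id];

		/* report the per-queue Rx offloads stored at setup time */
		qinfo->conf.offloads = rxq->offloads;
	}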
> /** need to alloc dummy mbuf, for wraparound when scanning hw ring */
> struct rte_mbuf fake_mbuf;
> /** hold packets to return to application */
> @@ -308,4 +309,8 @@ struct ngbe_txq_ops {
> void (*reset)(struct ngbe_tx_queue *txq);
> };
>
> +void ngbe_set_rx_function(struct rte_eth_dev *dev);
> +
> +uint64_t ngbe_get_rx_port_offloads(struct rte_eth_dev *dev);
> +
> #endif /* _NGBE_RXTX_H_ */
>
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 01/32] net/ngbe: add packet type
2021-09-08 8:37 ` [dpdk-dev] [PATCH 01/32] net/ngbe: add packet type Jiawen Wu
@ 2021-09-15 16:47 ` Ferruh Yigit
2021-09-22 8:01 ` Jiawen Wu
0 siblings, 1 reply; 54+ messages in thread
From: Ferruh Yigit @ 2021-09-15 16:47 UTC (permalink / raw)
To: Jiawen Wu, dev
On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> Add packet type macro definitions and convert ptype to ptid.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> doc/guides/nics/features/ngbe.ini | 1 +
> doc/guides/nics/ngbe.rst | 1 +
> drivers/net/ngbe/meson.build | 1 +
> drivers/net/ngbe/ngbe_ethdev.c | 9 +
> drivers/net/ngbe/ngbe_ethdev.h | 4 +
> drivers/net/ngbe/ngbe_ptypes.c | 300 ++++++++++++++++++++++++++++++
> drivers/net/ngbe/ngbe_ptypes.h | 240 ++++++++++++++++++++++++
> drivers/net/ngbe/ngbe_rxtx.c | 16 ++
> drivers/net/ngbe/ngbe_rxtx.h | 2 +
> 9 files changed, 574 insertions(+)
> create mode 100644 drivers/net/ngbe/ngbe_ptypes.c
> create mode 100644 drivers/net/ngbe/ngbe_ptypes.h
>
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> index 08d5f1b0dc..8b7588184a 100644
> --- a/doc/guides/nics/features/ngbe.ini
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -8,6 +8,7 @@ Speed capabilities = Y
> Link status = Y
> Link status event = Y
> Queue start/stop = Y
> +Packet type parsing = Y
"Packet type parsing" also requires to support
'rte_eth_dev_get_supported_ptypes()' & 'rte_eth_dev_set_ptypes()' APIs.
Current implementation seems parses the packet type and updates mbuf field for
it but doesn't support above APIs, can you please add them too? There is already
'ngbe_dev_supported_ptypes_get()' function but dev_ops seems not set.
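For reference, a minimal sketch of the wiring for the first API (the
'dev_supported_ptypes_get' field name comes from 'struct eth_dev_ops'; its
placement in the existing ops table is assumed, and 'rte_eth_dev_set_ptypes()'
would similarly need a 'dev_ptypes_set' implementation):

	static const struct eth_dev_ops ngbe_eth_dev_ops = {
		/* ... existing ops ... */
		/* lets rte_eth_dev_get_supported_ptypes() reach the driver */
		.dev_supported_ptypes_get = ngbe_dev_supported_ptypes_get,
	};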
<...>
> +++ b/drivers/net/ngbe/ngbe_ptypes.c
> @@ -0,0 +1,300 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd.
> + */
> +
> +#include <rte_mbuf.h>
> +#include <rte_memory.h>
> +
> +#include "base/ngbe_type.h"
> +#include "ngbe_ptypes.h"
> +
> +/* The ngbe_ptype_lookup is used to convert from the 8-bit ptid in the
> + * hardware to a bit-field that can be used by SW to more easily determine the
> + * packet type.
> + *
> + * Macros are used to shorten the table lines and make this table human
> + * readable.
> + *
> + * We store the PTYPE in the top byte of the bit field - this is just so that
> + * we can check that the table doesn't have a row missing, as the index into
> + * the table should be the PTYPE.
> + */
> +#define TPTE(ptid, l2, l3, l4, tun, el2, el3, el4) \
> + [ptid] = (RTE_PTYPE_L2_##l2 | \
> + RTE_PTYPE_L3_##l3 | \
> + RTE_PTYPE_L4_##l4 | \
> + RTE_PTYPE_TUNNEL_##tun | \
> + RTE_PTYPE_INNER_L2_##el2 | \
> + RTE_PTYPE_INNER_L3_##el3 | \
> + RTE_PTYPE_INNER_L4_##el4)
> +
> +#define RTE_PTYPE_L2_NONE 0
> +#define RTE_PTYPE_L3_NONE 0
> +#define RTE_PTYPE_L4_NONE 0
> +#define RTE_PTYPE_TUNNEL_NONE 0
> +#define RTE_PTYPE_INNER_L2_NONE 0
> +#define RTE_PTYPE_INNER_L3_NONE 0
> +#define RTE_PTYPE_INNER_L4_NONE 0
Why are you defining new PTYPEs? If these are for driver-internal use, you can
drop the 'RTE_' prefix.
<...>
> +
> +#ifndef RTE_PTYPE_UNKNOWN
> +#define RTE_PTYPE_UNKNOWN 0x00000000
> +#define RTE_PTYPE_L2_ETHER 0x00000001
> +#define RTE_PTYPE_L2_ETHER_TIMESYNC 0x00000002
> +#define RTE_PTYPE_L2_ETHER_ARP 0x00000003
> +#define RTE_PTYPE_L2_ETHER_LLDP 0x00000004
> +#define RTE_PTYPE_L2_ETHER_NSH 0x00000005
> +#define RTE_PTYPE_L2_ETHER_FCOE 0x00000009
> +#define RTE_PTYPE_L3_IPV4 0x00000010
> +#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
> +#define RTE_PTYPE_L3_IPV6 0x00000040
> +#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
> +#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
> +#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
> +#define RTE_PTYPE_L4_TCP 0x00000100
> +#define RTE_PTYPE_L4_UDP 0x00000200
> +#define RTE_PTYPE_L4_FRAG 0x00000300
> +#define RTE_PTYPE_L4_SCTP 0x00000400
> +#define RTE_PTYPE_L4_ICMP 0x00000500
> +#define RTE_PTYPE_L4_NONFRAG 0x00000600
> +#define RTE_PTYPE_TUNNEL_IP 0x00001000
> +#define RTE_PTYPE_TUNNEL_GRE 0x00002000
> +#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
> +#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
> +#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
> +#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
> +#define RTE_PTYPE_INNER_L2_ETHER 0x00010000
> +#define RTE_PTYPE_INNER_L2_ETHER_VLAN 0x00020000
> +#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
> +#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
> +#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
> +#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
> +#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
> +#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
> +#define RTE_PTYPE_INNER_L4_TCP 0x01000000
> +#define RTE_PTYPE_INNER_L4_UDP 0x02000000
> +#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
> +#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
> +#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
> +#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
> +#endif /* !RTE_PTYPE_UNKNOWN */
These are already defined in the mbuf public header; why are they defined again?
<...>
> @@ -378,6 +389,10 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> rxm->data_len = pkt_len;
> rxm->port = rxq->port_id;
>
> + pkt_info = rte_le_to_cpu_32(rxd.qw0.dw0);
> + rxm->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info,
> + rxq->pkt_type_mask);
> +
> /*
> * Store the mbuf address into the next entry of the array
> * of returned packets.
> @@ -799,6 +814,7 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
> rxq->port_id = dev->data->port_id;
> rxq->drop_en = rx_conf->rx_drop_en;
> rxq->rx_deferred_start = rx_conf->rx_deferred_start;
> + rxq->pkt_type_mask = NGBE_PTID_MASK;
What is the use of 'pkt_type_mask'? It seems to be a fixed value, so why keep
it per queue?
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 03/32] net/ngbe: support Rx checksum offload
2021-09-08 8:37 ` [dpdk-dev] [PATCH 03/32] net/ngbe: support Rx checksum offload Jiawen Wu
@ 2021-09-15 16:48 ` Ferruh Yigit
0 siblings, 0 replies; 54+ messages in thread
From: Ferruh Yigit @ 2021-09-15 16:48 UTC (permalink / raw)
To: Jiawen Wu, dev
On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> Support IP/L4 checksum on Rx, and convert it to mbuf flags.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> doc/guides/nics/features/ngbe.ini | 2 +
> doc/guides/nics/ngbe.rst | 1 +
> drivers/net/ngbe/ngbe_rxtx.c | 75 +++++++++++++++++++++++++++++--
> 3 files changed, 75 insertions(+), 3 deletions(-)
>
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> index f85754eb7a..2777ed5a62 100644
> --- a/doc/guides/nics/features/ngbe.ini
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -9,6 +9,8 @@ Link status = Y
> Link status event = Y
> Queue start/stop = Y
> Scattered Rx = Y
> +L3 checksum offload = P
> +L4 checksum offload = P
Why partially supported? Can you please give details in the commit log?
<...>
> +static inline uint64_t
> +rx_desc_error_to_pkt_flags(uint32_t rx_status)
> +{
> + uint64_t pkt_flags = 0;
> +
> + /* checksum offload can't be disabled */
> + if (rx_status & NGBE_RXD_STAT_IPCS) {
> + pkt_flags |= (rx_status & NGBE_RXD_ERR_IPCS
> + ? PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD);
> + }
> +
> + if (rx_status & NGBE_RXD_STAT_L4CS) {
> + pkt_flags |= (rx_status & NGBE_RXD_ERR_L4CS
> + ? PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD);
> + }
> +
> + if (rx_status & NGBE_RXD_STAT_EIPCS &&
> + rx_status & NGBE_RXD_ERR_EIPCS) {
You can join both lines and drop the {} for the single-line if block, e.g.:
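A sketch of the joined form (assuming the body sets PKT_RX_OUTER_IP_CKSUM_BAD,
as in the final version of this function):

	if (rx_status & NGBE_RXD_STAT_EIPCS && rx_status & NGBE_RXD_ERR_EIPCS)
		pkt_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;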
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 05/32] net/ngbe: support CRC offload
2021-09-08 8:37 ` [dpdk-dev] [PATCH 05/32] net/ngbe: support CRC offload Jiawen Wu
@ 2021-09-15 16:48 ` Ferruh Yigit
0 siblings, 0 replies; 54+ messages in thread
From: Ferruh Yigit @ 2021-09-15 16:48 UTC (permalink / raw)
To: Jiawen Wu, dev
On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> Support stripping or keeping the CRC in the Rx path.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> doc/guides/nics/features/ngbe.ini | 1 +
> drivers/net/ngbe/ngbe_rxtx.c | 53 +++++++++++++++++++++++++++++--
> drivers/net/ngbe/ngbe_rxtx.h | 1 +
> 3 files changed, 53 insertions(+), 2 deletions(-)
>
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> index 32f74a3084..2a472d9434 100644
> --- a/doc/guides/nics/features/ngbe.ini
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -10,6 +10,7 @@ Link status event = Y
> Queue start/stop = Y
> Scattered Rx = Y
> TSO = Y
> +CRC offload = P
Again, can you please describe why it is supported partially?
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 06/32] net/ngbe: support jumbo frame
2021-09-08 8:37 ` [dpdk-dev] [PATCH 06/32] net/ngbe: support jumbo frame Jiawen Wu
@ 2021-09-15 16:48 ` Ferruh Yigit
0 siblings, 0 replies; 54+ messages in thread
From: Ferruh Yigit @ 2021-09-15 16:48 UTC (permalink / raw)
To: Jiawen Wu, dev
On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> Add support for Rx jumbo frames.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> doc/guides/nics/features/ngbe.ini | 1 +
> doc/guides/nics/ngbe.rst | 1 +
> drivers/net/ngbe/ngbe_rxtx.c | 11 ++++++++++-
> 3 files changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> index 2a472d9434..30fdfe62c7 100644
> --- a/doc/guides/nics/features/ngbe.ini
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -8,6 +8,7 @@ Speed capabilities = Y
> Link status = Y
> Link status event = Y
> Queue start/stop = Y
> +Jumbo frame = Y
> Scattered Rx = Y
> TSO = Y
> CRC offload = P
> diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
> index 6a6ae39243..702a455041 100644
> --- a/doc/guides/nics/ngbe.rst
> +++ b/doc/guides/nics/ngbe.rst
> @@ -14,6 +14,7 @@ Features
> - Packet type information
> - Checksum offload
> - TSO offload
> +- Jumbo frames
> - Link state information
> - Scattered and gather for TX and RX
>
> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
> index f9d8cf9d19..4238fbe3b8 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.c
> +++ b/drivers/net/ngbe/ngbe_rxtx.c
> @@ -2008,6 +2008,7 @@ ngbe_get_rx_port_offloads(struct rte_eth_dev *dev __rte_unused)
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_KEEP_CRC |
> + DEV_RX_OFFLOAD_JUMBO_FRAME |
There is a patch to remove this offload flag [1]; would you be OK with
postponing this patch?
[1]
https://patches.dpdk.org/project/dpdk/list/?series=17956
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 08/32] net/ngbe: support basic statistics
2021-09-08 8:37 ` [dpdk-dev] [PATCH 08/32] net/ngbe: support basic statistics Jiawen Wu
@ 2021-09-15 16:50 ` Ferruh Yigit
2021-10-14 2:51 ` Jiawen Wu
0 siblings, 1 reply; 54+ messages in thread
From: Ferruh Yigit @ 2021-09-15 16:50 UTC (permalink / raw)
To: Jiawen Wu; +Cc: dev
On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> Support to read and clear basic statistics, and configure per-queue
> stats counter mapping.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> doc/guides/nics/features/ngbe.ini | 2 +
> doc/guides/nics/ngbe.rst | 1 +
> drivers/net/ngbe/base/ngbe_dummy.h | 5 +
> drivers/net/ngbe/base/ngbe_hw.c | 101 ++++++++++
> drivers/net/ngbe/base/ngbe_hw.h | 1 +
> drivers/net/ngbe/base/ngbe_type.h | 134 +++++++++++++
> drivers/net/ngbe/ngbe_ethdev.c | 300 +++++++++++++++++++++++++++++
> drivers/net/ngbe/ngbe_ethdev.h | 19 ++
> 8 files changed, 563 insertions(+)
>
<...>
> +static int
> +ngbe_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> +{
> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> + struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev);
> + struct ngbe_stat_mappings *stat_mappings =
> + NGBE_DEV_STAT_MAPPINGS(dev);
> + uint32_t i, j;
> +
> + ngbe_read_stats_registers(hw, hw_stats);
> +
> + if (stats == NULL)
> + return -EINVAL;
> +
> + /* Fill out the rte_eth_stats statistics structure */
> + stats->ipackets = hw_stats->rx_packets;
> + stats->ibytes = hw_stats->rx_bytes;
> + stats->opackets = hw_stats->tx_packets;
> + stats->obytes = hw_stats->tx_bytes;
> +
> + memset(&stats->q_ipackets, 0, sizeof(stats->q_ipackets));
> + memset(&stats->q_opackets, 0, sizeof(stats->q_opackets));
> + memset(&stats->q_ibytes, 0, sizeof(stats->q_ibytes));
> + memset(&stats->q_obytes, 0, sizeof(stats->q_obytes));
> + memset(&stats->q_errors, 0, sizeof(stats->q_errors));
> + for (i = 0; i < NGBE_MAX_QP; i++) {
> + uint32_t n = i / NB_QMAP_FIELDS_PER_QSM_REG;
> + uint32_t offset = (i % NB_QMAP_FIELDS_PER_QSM_REG) * 8;
> + uint32_t q_map;
> +
> + q_map = (stat_mappings->rqsm[n] >> offset)
> + & QMAP_FIELD_RESERVED_BITS_MASK;
> + j = (q_map < RTE_ETHDEV_QUEUE_STAT_CNTRS
> + ? q_map : q_map % RTE_ETHDEV_QUEUE_STAT_CNTRS);
> + stats->q_ipackets[j] += hw_stats->qp[i].rx_qp_packets;
> + stats->q_ibytes[j] += hw_stats->qp[i].rx_qp_bytes;
> +
> + q_map = (stat_mappings->tqsm[n] >> offset)
> + & QMAP_FIELD_RESERVED_BITS_MASK;
> + j = (q_map < RTE_ETHDEV_QUEUE_STAT_CNTRS
> + ? q_map : q_map % RTE_ETHDEV_QUEUE_STAT_CNTRS);
> + stats->q_opackets[j] += hw_stats->qp[i].tx_qp_packets;
> + stats->q_obytes[j] += hw_stats->qp[i].tx_qp_bytes;
> + }
> +
> + /* Rx Errors */
> + stats->imissed = hw_stats->rx_total_missed_packets +
> + hw_stats->rx_dma_drop;
> + stats->ierrors = hw_stats->rx_crc_errors +
> + hw_stats->rx_mac_short_packet_dropped +
> + hw_stats->rx_length_errors +
> + hw_stats->rx_undersize_errors +
> + hw_stats->rx_oversize_errors +
> + hw_stats->rx_illegal_byte_errors +
> + hw_stats->rx_error_bytes +
> + hw_stats->rx_fragment_errors;
> +
> + /* Tx Errors */
> + stats->oerrors = 0;
> + return 0;
You can consider filling the 'stats->rx_nombuf' counter too; this needs to be
calculated by the driver.
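A hedged sketch of the common pattern (the driver already counts allocation
failures in 'dev->data->rx_mbuf_alloc_failed' in its Rx paths):

	/* mbufs that could not be allocated for received packets */
	stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;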
<...>
> +
> static int
> ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> {
> @@ -1462,6 +1759,9 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
> .dev_close = ngbe_dev_close,
> .dev_reset = ngbe_dev_reset,
> .link_update = ngbe_dev_link_update,
> + .stats_get = ngbe_dev_stats_get,
> + .stats_reset = ngbe_dev_stats_reset,
> + .queue_stats_mapping_set = ngbe_dev_queue_stats_mapping_set,
'queue_stats_mapping_set' is only needed when the number of stats registers is
less than the number of queues. If this is not the case for you, please drop
this support.
Also, we are switching queue stats to xstats; please see
'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS'. This is mainly done to remove the
compile-time 'RTE_ETHDEV_QUEUE_STAT_CNTRS' limitation.
Btw, 'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS' seems to be missing; you should set
it in the driver.
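Setting the flag is typically a one-liner in device init; a sketch, assuming
it goes into 'eth_ngbe_dev_init()':

	/* let ethdev fill per-queue basic stats from the queue xstats */
	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;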
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 10/32] net/ngbe: support MTU set
2021-09-08 8:37 ` [dpdk-dev] [PATCH 10/32] net/ngbe: support MTU set Jiawen Wu
@ 2021-09-15 16:52 ` Ferruh Yigit
0 siblings, 0 replies; 54+ messages in thread
From: Ferruh Yigit @ 2021-09-15 16:52 UTC (permalink / raw)
To: Jiawen Wu, dev
On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> Support updating port MTU.
>
Although this won't conflict further, if it is not urgent can you please hold
this feature until the following set is clarified:
https://patches.dpdk.org/project/dpdk/list/?series=17956
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> doc/guides/nics/features/ngbe.ini | 1 +
> drivers/net/ngbe/base/ngbe_type.h | 3 +++
> drivers/net/ngbe/ngbe_ethdev.c | 41 +++++++++++++++++++++++++++++++
> 3 files changed, 45 insertions(+)
<...>
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 12/32] net/ngbe: support getting FW version
2021-09-08 8:37 ` [dpdk-dev] [PATCH 12/32] net/ngbe: support getting FW version Jiawen Wu
@ 2021-09-15 16:53 ` Ferruh Yigit
0 siblings, 0 replies; 54+ messages in thread
From: Ferruh Yigit @ 2021-09-15 16:53 UTC (permalink / raw)
To: Jiawen Wu, dev
On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> Add firmware version get operation.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
<...>
> +static int
> +ngbe_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
> +{
> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> + int ret;
> +
> + ret = snprintf(fw_version, fw_size, "0x%08x", hw->eeprom_id);
> +
> + if (ret < 0)
> + return -EINVAL;
> +
> + ret += 1; /* add the size of '\0' */
> + if (fw_size < (size_t)ret)
> + return ret;
> + else
> + return 0;
You can drop the 'else' leg of the branch; see the sketch below.
> +
> + return 0;
> +}
> +
<...>
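That is, the tail of the function could simply read (sketch only):

	ret += 1; /* add the size of '\0' */
	if (fw_size < (size_t)ret)
		return ret;

	return 0;
}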
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 14/32] net/ngbe: support Rx interrupt
2021-09-08 8:37 ` [dpdk-dev] [PATCH 14/32] net/ngbe: support Rx interrupt Jiawen Wu
@ 2021-09-15 16:53 ` Ferruh Yigit
2021-10-14 10:11 ` Jiawen Wu
0 siblings, 1 reply; 54+ messages in thread
From: Ferruh Yigit @ 2021-09-15 16:53 UTC (permalink / raw)
To: Jiawen Wu, dev
On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> Support Rx queue interrupt.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> doc/guides/nics/features/ngbe.ini | 1 +
> doc/guides/nics/ngbe.rst | 1 +
> drivers/net/ngbe/ngbe_ethdev.c | 35 +++++++++++++++++++++++++++++++
> 3 files changed, 37 insertions(+)
>
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> index 1006c3935b..d14469eb43 100644
> --- a/doc/guides/nics/features/ngbe.ini
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -7,6 +7,7 @@
> Speed capabilities = Y
> Link status = Y
> Link status event = Y
> +Rx interrupt = Y
This also requires configuring Rx interrupts if the user's
'dev_conf.intr_conf.rxq' config requests it.
Can an application request and use Rx interrupts with the current status of
the driver? Did you test it?
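For context, a minimal sketch of the application-side usage that has to work
(standard ethdev calls; the port/queue ids and counts are illustrative):

	struct rte_eth_conf conf = { 0 };

	conf.intr_conf.rxq = 1; /* request Rx queue interrupts */
	rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
	/* ... queue setup and rte_eth_dev_start() ... */
	rte_eth_dev_rx_intr_enable(port_id, queue_id);
	/* the app then sleeps on the interrupt event fd, e.g. via
	 * rte_eth_dev_rx_intr_ctl_q() + rte_epoll_wait()
	 */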
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 16/32] net/ngbe: support VLAN filter
2021-09-08 8:37 ` [dpdk-dev] [PATCH 16/32] net/ngbe: support VLAN filter Jiawen Wu
@ 2021-09-15 16:54 ` Ferruh Yigit
0 siblings, 0 replies; 54+ messages in thread
From: Ferruh Yigit @ 2021-09-15 16:54 UTC (permalink / raw)
To: Jiawen Wu, dev
On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> Support filtering by VLAN tag identifier.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
<...>
> @@ -2411,7 +2536,10 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
> .queue_stats_mapping_set = ngbe_dev_queue_stats_mapping_set,
> .fw_version_get = ngbe_fw_version_get,
> .mtu_set = ngbe_dev_mtu_set,
> + .vlan_filter_set = ngbe_vlan_filter_set,
> + .vlan_tpid_set = ngbe_vlan_tpid_set,
> .vlan_offload_set = ngbe_vlan_offload_set,
> + .vlan_strip_queue_set = ngbe_vlan_strip_queue_set,
Since this enables/disables VLAN strip per queue, does the feature fit better
in patch 7/32, which enables/disables the VLAN filter?
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 19/32] net/ngbe: add mailbox process operations
2021-09-08 8:37 ` [dpdk-dev] [PATCH 19/32] net/ngbe: add mailbox process operations Jiawen Wu
@ 2021-09-15 16:56 ` Ferruh Yigit
0 siblings, 0 replies; 54+ messages in thread
From: Ferruh Yigit @ 2021-09-15 16:56 UTC (permalink / raw)
To: Jiawen Wu, dev; +Cc: Ray Kinsella
On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> Add check operations for VF function-level reset,
> mailbox messages, and ACKs from the VF,
> and wait to process the messages.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
<...>
> --- a/drivers/net/ngbe/meson.build
> +++ b/drivers/net/ngbe/meson.build
> @@ -20,3 +20,5 @@ sources = files(
> deps += ['hash']
>
> includes += include_directories('base')
> +
> +install_headers('rte_pmd_ngbe.h')
Why install this header?
Normally drivers are not expected to have public headers; only some PMDs have
public APIs (we call them PMD-specific APIs), but that is not something we
want. Can you please describe why this driver needs public symbols?
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 04/32] net/ngbe: support TSO
2021-09-08 8:37 ` [dpdk-dev] [PATCH 04/32] net/ngbe: support TSO Jiawen Wu
@ 2021-09-15 16:57 ` Ferruh Yigit
0 siblings, 0 replies; 54+ messages in thread
From: Ferruh Yigit @ 2021-09-15 16:57 UTC (permalink / raw)
To: Jiawen Wu, dev
On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> Add transmit datapath with offloads, and support TCP segmentation
> offload.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
<...>
> +/* Takes an ethdev and a queue and sets up the tx function to be used based on
> + * the queue parameters. Used in tx_queue_setup by primary process and then
> + * in dev_init by secondary process when attaching to an existing ethdev.
> + */
> +void
> +ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq)
> +{
> + /* Use a simple Tx queue (no offloads, no multi segs) if possible */
> + if (txq->offloads == 0 &&
> + txq->tx_free_thresh >= RTE_PMD_NGBE_TX_MAX_BURST) {
> + PMD_INIT_LOG(DEBUG, "Using simple tx code path");
> + dev->tx_pkt_burst = ngbe_xmit_pkts_simple;
> + dev->tx_pkt_prepare = NULL;
> + } else {
> + PMD_INIT_LOG(DEBUG, "Using full-featured tx code path");
> + PMD_INIT_LOG(DEBUG,
> + " - offloads = 0x%" PRIx64,
> + txq->offloads);
> + PMD_INIT_LOG(DEBUG,
> + " - tx_free_thresh = %lu [RTE_PMD_NGBE_TX_MAX_BURST=%lu]",
> + (unsigned long)txq->tx_free_thresh,
> + (unsigned long)RTE_PMD_NGBE_TX_MAX_BURST);
> + dev->tx_pkt_burst = ngbe_xmit_pkts;
> + dev->tx_pkt_prepare = ngbe_prep_pkts;
> + }
> +}
Since the driver has multiple Rx/Tx functions now, you may want to implement
the new APIs to get info about the current burst function (in a separate
patch):
'rte_eth_rx_burst_mode_get()'
'rte_eth_tx_burst_mode_get()'
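A rough sketch of the Rx side (the dev_op signature and 'struct
rte_eth_burst_mode' are from ethdev; the function name and info strings are
illustrative):

	static int
	ngbe_rx_burst_mode_get(struct rte_eth_dev *dev,
			       uint16_t queue_id __rte_unused,
			       struct rte_eth_burst_mode *mode)
	{
		const char *info = NULL;

		/* map the currently selected burst function to a name */
		if (dev->rx_pkt_burst == ngbe_recv_pkts)
			info = "Scalar";
		else if (dev->rx_pkt_burst == ngbe_recv_pkts_bulk_alloc)
			info = "Scalar bulk alloc";
		else
			return -EINVAL;

		snprintf(mode->info, sizeof(mode->info), "%s", info);
		return 0;
	}

plus '.rx_burst_mode_get = ngbe_rx_burst_mode_get,' in the ops table.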
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 28/32] net/ngbe: add IPsec context creation
2021-09-08 8:37 ` [dpdk-dev] [PATCH 28/32] net/ngbe: add IPsec context creation Jiawen Wu
@ 2021-09-15 16:58 ` Ferruh Yigit
2021-09-16 9:00 ` Hemant Agrawal
2021-09-16 9:04 ` Hemant Agrawal
1 sibling, 1 reply; 54+ messages in thread
From: Ferruh Yigit @ 2021-09-15 16:58 UTC (permalink / raw)
To: Hemant Agrawal
Cc: Jiawen Wu, dev, Bruce Richardson, Thomas Monjalon,
David Marchand, Akhil Goyal
On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> Initialize security context, and support getting security
> capabilities.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
<...>
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -430,6 +430,12 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
> /* Unlock any pending hardware semaphore */
> ngbe_swfw_lock_reset(hw);
>
> +#ifdef RTE_LIB_SECURITY
> + /* Initialize security_ctx only for primary process*/
> + if (ngbe_ipsec_ctx_create(eth_dev))
> + return -ENOMEM;
> +#endif
Hi Hemant,
I see 'RTE_LIB_SECURITY' is still used in some PMDs, and this new PMD also
uses it.
Previously I assumed this macro was to mark that the security library is
enabled; is this macro still valid? Who should set this macro now?
Also, can you please help review this and the next few patches, since they
are related to security?
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 32/32] doc: update for ngbe
2021-09-08 8:37 ` [dpdk-dev] [PATCH 32/32] doc: update for ngbe Jiawen Wu
@ 2021-09-15 16:58 ` Ferruh Yigit
0 siblings, 0 replies; 54+ messages in thread
From: Ferruh Yigit @ 2021-09-15 16:58 UTC (permalink / raw)
To: Jiawen Wu, dev
On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> Add new ngbe PMD features to the 21.11 release notes.
>
Can you please distribute the content of this patch to the commits that add
the features documented here?
As a result, there shouldn't be a separate patch for the release notes update.
Thanks.
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> doc/guides/rel_notes/release_21_11.rst | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 675b573834..81093cf6c0 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -62,6 +62,16 @@ New Features
> * Added bus-level parsing of the devargs syntax.
> * Kept compatibility with the legacy syntax as parsing fallback.
>
> +* **Updated Wangxun ngbe driver.**
> + Updated the Wangxun ngbe driver. Added more features to complete the driver,
> + including:
> +
> + * Added offloads and packet type on RxTx.
> + * Added device basic statistics and extended stats.
> + * Added VLAN and MAC filters.
> + * Added multi-queue and RSS.
> + * Added SRIOV.
> + * Added IPsec.
>
> Removed Items
> -------------
>
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 28/32] net/ngbe: add IPsec context creation
2021-09-15 16:58 ` Ferruh Yigit
@ 2021-09-16 9:00 ` Hemant Agrawal
2021-09-16 17:15 ` Ferruh Yigit
0 siblings, 1 reply; 54+ messages in thread
From: Hemant Agrawal @ 2021-09-16 9:00 UTC (permalink / raw)
To: Ferruh Yigit, Hemant Agrawal
Cc: Jiawen Wu, dev, Bruce Richardson, Thomas Monjalon,
David Marchand, Akhil Goyal
On 9/15/2021 10:28 PM, Ferruh Yigit wrote:
> On 9/8/2021 9:37 AM, Jiawen Wu wrote:
>> Initialize security context, and support getting security
>> capabilities.
>>
>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> <...>
>
>> --- a/drivers/net/ngbe/ngbe_ethdev.c
>> +++ b/drivers/net/ngbe/ngbe_ethdev.c
>> @@ -430,6 +430,12 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>> /* Unlock any pending hardware semaphore */
>> ngbe_swfw_lock_reset(hw);
>>
>> +#ifdef RTE_LIB_SECURITY
>> + /* Initialize security_ctx only for primary process*/
>> + if (ngbe_ipsec_ctx_create(eth_dev))
>> + return -ENOMEM;
>> +#endif
> Hi Hemant,
>
> I see 'RTE_LIB_SECURITY' is still used in some PMDs, and this new PMD also
> uses it.
> Previously I assumed this macro was to mark that the security library is
> enabled; is this macro still valid? Who should set this macro now?
>
> Also, can you please help review this and the next few patches, since they
> are related to security?
Hi Ferruh,
It indicates whether the driver is using security library functions. In an
Ethernet driver, it typically means inline security offload.
OK, I will try to review.
regards,
Hemant
>
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 28/32] net/ngbe: add IPsec context creation
2021-09-08 8:37 ` [dpdk-dev] [PATCH 28/32] net/ngbe: add IPsec context creation Jiawen Wu
2021-09-15 16:58 ` Ferruh Yigit
@ 2021-09-16 9:04 ` Hemant Agrawal
1 sibling, 0 replies; 54+ messages in thread
From: Hemant Agrawal @ 2021-09-16 9:04 UTC (permalink / raw)
To: Jiawen Wu, dev, gakhil
On 9/8/2021 2:07 PM, Jiawen Wu wrote:
> Initialize security context, and support getting security
> capabilities.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> doc/guides/nics/features/ngbe.ini | 1 +
> drivers/net/ngbe/meson.build | 3 +-
> drivers/net/ngbe/ngbe_ethdev.c | 10 ++
> drivers/net/ngbe/ngbe_ethdev.h | 4 +
> drivers/net/ngbe/ngbe_ipsec.c | 178 ++++++++++++++++++++++++++++++
> 5 files changed, 195 insertions(+), 1 deletion(-)
> create mode 100644 drivers/net/ngbe/ngbe_ipsec.c
>
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> index 56d5d71ea8..facdb5f006 100644
> --- a/doc/guides/nics/features/ngbe.ini
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -23,6 +23,7 @@ RSS reta update = Y
> SR-IOV = Y
> VLAN filter = Y
> Flow control = Y
> +Inline crypto = Y
> CRC offload = P
> VLAN offload = P
> QinQ offload = P
> diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
> index b276ec3341..f222595b19 100644
> --- a/drivers/net/ngbe/meson.build
> +++ b/drivers/net/ngbe/meson.build
> @@ -12,12 +12,13 @@ objs = [base_objs]
>
> sources = files(
> 'ngbe_ethdev.c',
> + 'ngbe_ipsec.c',
Ideally you should create a crypto/security driver and keep your
IPsec-related functions there.
@akhil - what is your opinion here?
> 'ngbe_ptypes.c',
> 'ngbe_pf.c',
> 'ngbe_rxtx.c',
> )
>
> -deps += ['hash']
> +deps += ['hash', 'security']
>
> includes += include_directories('base')
>
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 4eaf9b0724..b0e0f7411e 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -430,6 +430,12 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
> /* Unlock any pending hardware semaphore */
> ngbe_swfw_lock_reset(hw);
>
> +#ifdef RTE_LIB_SECURITY
> + /* Initialize security_ctx only for primary process*/
> + if (ngbe_ipsec_ctx_create(eth_dev))
> + return -ENOMEM;
> +#endif
> +
> /* Get Hardware Flow Control setting */
> hw->fc.requested_mode = ngbe_fc_full;
> hw->fc.current_mode = ngbe_fc_full;
> @@ -1282,6 +1288,10 @@ ngbe_dev_close(struct rte_eth_dev *dev)
> rte_free(dev->data->hash_mac_addrs);
> dev->data->hash_mac_addrs = NULL;
>
> +#ifdef RTE_LIB_SECURITY
> + rte_free(dev->security_ctx);
> +#endif
> +
> return ret;
> }
>
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index aacc0b68b2..9eda024d65 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -264,6 +264,10 @@ void ngbe_pf_mbx_process(struct rte_eth_dev *eth_dev);
>
> int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev);
>
> +#ifdef RTE_LIB_SECURITY
> +int ngbe_ipsec_ctx_create(struct rte_eth_dev *dev);
> +#endif
> +
> /* High threshold controlling when to start sending XOFF frames. */
> #define NGBE_FC_XOFF_HITH 128 /*KB*/
> /* Low threshold controlling when to start sending XON frames. */
> diff --git a/drivers/net/ngbe/ngbe_ipsec.c b/drivers/net/ngbe/ngbe_ipsec.c
> new file mode 100644
> index 0000000000..5f8b0bab29
> --- /dev/null
> +++ b/drivers/net/ngbe/ngbe_ipsec.c
> @@ -0,0 +1,178 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd.
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#include <ethdev_pci.h>
> +#include <rte_security_driver.h>
> +#include <rte_cryptodev.h>
> +
> +#include "base/ngbe.h"
> +#include "ngbe_ethdev.h"
> +
> +static const struct rte_security_capability *
> +ngbe_crypto_capabilities_get(void *device __rte_unused)
> +{
> + static const struct rte_cryptodev_capabilities
> + aes_gcm_gmac_crypto_capabilities[] = {
> + { /* AES GMAC (128-bit) */
> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> + {.auth = {
> + .algo = RTE_CRYPTO_AUTH_AES_GMAC,
> + .block_size = 16,
> + .key_size = {
> + .min = 16,
> + .max = 16,
> + .increment = 0
> + },
> + .digest_size = {
> + .min = 16,
> + .max = 16,
> + .increment = 0
> + },
> + .iv_size = {
> + .min = 12,
> + .max = 12,
> + .increment = 0
> + }
> + }, }
> + }, }
> + },
> + { /* AES GCM (128-bit) */
> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
> + {.aead = {
> + .algo = RTE_CRYPTO_AEAD_AES_GCM,
> + .block_size = 16,
> + .key_size = {
> + .min = 16,
> + .max = 16,
> + .increment = 0
> + },
> + .digest_size = {
> + .min = 16,
> + .max = 16,
> + .increment = 0
> + },
> + .aad_size = {
> + .min = 0,
> + .max = 65535,
> + .increment = 1
> + },
> + .iv_size = {
> + .min = 12,
> + .max = 12,
> + .increment = 0
> + }
> + }, }
> + }, }
> + },
> + {
> + .op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED
> + }, }
> + },
> + };
> +
> + static const struct rte_security_capability
> + ngbe_security_capabilities[] = {
> + { /* IPsec Inline Crypto ESP Transport Egress */
> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
> + {.ipsec = {
> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
> + .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
> + .options = { 0 }
> + } },
> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
> + .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
> + },
> + { /* IPsec Inline Crypto ESP Transport Ingress */
> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
> + {.ipsec = {
> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
> + .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
> + .options = { 0 }
> + } },
> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
> + .ol_flags = 0
> + },
> + { /* IPsec Inline Crypto ESP Tunnel Egress */
> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
> + {.ipsec = {
> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
> + .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
> + .options = { 0 }
> + } },
> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
> + .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
> + },
> + { /* IPsec Inline Crypto ESP Tunnel Ingress */
> + .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
> + .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
> + {.ipsec = {
> + .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
> + .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
> + .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
> + .options = { 0 }
> + } },
> + .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
> + .ol_flags = 0
> + },
> + {
> + .action = RTE_SECURITY_ACTION_TYPE_NONE
> + }
> + };
> +
> + return ngbe_security_capabilities;
> +}
> +
> +static struct rte_security_ops ngbe_security_ops = {
> + .capabilities_get = ngbe_crypto_capabilities_get
> +};
> +
> +static int
> +ngbe_crypto_capable(struct rte_eth_dev *dev)
> +{
> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> + uint32_t reg_i, reg, capable = 1;
> + /* test if Rx crypto can be enabled and then write back the initial value */
> + reg_i = rd32(hw, NGBE_SECRXCTL);
> + wr32m(hw, NGBE_SECRXCTL, NGBE_SECRXCTL_ODSA, 0);
> + reg = rd32m(hw, NGBE_SECRXCTL, NGBE_SECRXCTL_ODSA);
> + if (reg != 0)
> + capable = 0;
> + wr32(hw, NGBE_SECRXCTL, reg_i);
> + return capable;
> +}
> +
> +int
> +ngbe_ipsec_ctx_create(struct rte_eth_dev *dev)
> +{
> + struct rte_security_ctx *ctx = NULL;
> +
> + if (ngbe_crypto_capable(dev)) {
> + ctx = rte_malloc("rte_security_instances_ops",
> + sizeof(struct rte_security_ctx), 0);
> + if (ctx) {
> + ctx->device = (void *)dev;
> + ctx->ops = &ngbe_security_ops;
> + ctx->sess_cnt = 0;
> + dev->security_ctx = ctx;
> + } else {
> + return -ENOMEM;
> + }
> + }
> + if (rte_security_dynfield_register() < 0)
> + return -rte_errno;
> + return 0;
> +}
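As context for this patch: once a PMD registers a security context this way,
an application can discover the advertised capabilities through the generic
rte_security API rather than touching the driver directly. A minimal sketch,
assuming a configured ngbe port (the function name and port_id here are
illustrative, not part of the patch):

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_security.h>

    static void
    dump_inline_ipsec_caps(uint16_t port_id)
    {
        /* returns the ctx set up by ngbe_ipsec_ctx_create(), or NULL */
        struct rte_security_ctx *ctx = rte_eth_dev_get_sec_ctx(port_id);
        const struct rte_security_capability *cap;

        if (ctx == NULL)
            return; /* port has no inline crypto support */

        /* the list is terminated by RTE_SECURITY_ACTION_TYPE_NONE */
        for (cap = rte_security_capabilities_get(ctx);
             cap->action != RTE_SECURITY_ACTION_TYPE_NONE; cap++) {
            if (cap->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
                printf("IPsec cap: mode %d, direction %d\n",
                       cap->ipsec.mode, cap->ipsec.direction);
        }
    }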
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 28/32] net/ngbe: add IPsec context creation
2021-09-16 9:00 ` Hemant Agrawal
@ 2021-09-16 17:15 ` Ferruh Yigit
0 siblings, 0 replies; 54+ messages in thread
From: Ferruh Yigit @ 2021-09-16 17:15 UTC (permalink / raw)
To: hemant.agrawal
Cc: Jiawen Wu, dev, Bruce Richardson, Thomas Monjalon,
David Marchand, Akhil Goyal
On 9/16/2021 10:00 AM, Hemant Agrawal wrote:
>
> On 9/15/2021 10:28 PM, Ferruh Yigit wrote:
>> On 9/8/2021 9:37 AM, Jiawen Wu wrote:
>>> Initialize security context, and support getting security
>>> capabilities.
>>>
>>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
>> <...>
>>
>>> --- a/drivers/net/ngbe/ngbe_ethdev.c
>>> +++ b/drivers/net/ngbe/ngbe_ethdev.c
>>> @@ -430,6 +430,12 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>>> /* Unlock any pending hardware semaphore */
>>> ngbe_swfw_lock_reset(hw);
>>> +#ifdef RTE_LIB_SECURITY
>>> + /* Initialize security_ctx only for primary process */
>>> + if (ngbe_ipsec_ctx_create(eth_dev))
>>> + return -ENOMEM;
>>> +#endif
>> Hi Hemant,
>>
>> I see 'RTE_LIB_SECURITY' is still used in some PMDs, and this new PMD also
>> uses it.
>> Previously I assumed this macro marked that the security library is enabled; is
>> this macro still valid? Who should set it now?
>>
>> Also, can you please help review this and the next few patches, since they are
>> related to security?
>
> Hi Ferruh,
>
> It indicates whether the driver is using security library functions. In an Ethernet
> driver, it typically means inline security offload.
>
Got it, but right now who sets this macro? It isn't set automatically when the
security library is enabled/compiled, right?
> Ok, I will try to review.
>
>
> regards,
>
> Hemant
>
>>
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 01/32] net/ngbe: add packet type
2021-09-15 16:47 ` Ferruh Yigit
@ 2021-09-22 8:01 ` Jiawen Wu
0 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-09-22 8:01 UTC (permalink / raw)
To: 'Ferruh Yigit', dev
On September 16, 2021 12:48 AM, Ferruh Yigit wrote:
> On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> > Add packet type macro definitions and convert ptype to ptid.
> >
> > Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> > ---
> > doc/guides/nics/features/ngbe.ini | 1 +
> > doc/guides/nics/ngbe.rst | 1 +
> > drivers/net/ngbe/meson.build | 1 +
> > drivers/net/ngbe/ngbe_ethdev.c | 9 +
> > drivers/net/ngbe/ngbe_ethdev.h | 4 +
> > drivers/net/ngbe/ngbe_ptypes.c | 300 ++++++++++++++++++++++++
> > drivers/net/ngbe/ngbe_ptypes.h | 240 ++++++++++++++++++++++++
> > drivers/net/ngbe/ngbe_rxtx.c | 16 ++
> > drivers/net/ngbe/ngbe_rxtx.h | 2 +
> > 9 files changed, 574 insertions(+)
> > create mode 100644 drivers/net/ngbe/ngbe_ptypes.c
> > create mode 100644 drivers/net/ngbe/ngbe_ptypes.h
> >
> > diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> > index 08d5f1b0dc..8b7588184a 100644
> > --- a/doc/guides/nics/features/ngbe.ini
> > +++ b/doc/guides/nics/features/ngbe.ini
> > @@ -8,6 +8,7 @@ Speed capabilities = Y
> > Link status = Y
> > Link status event = Y
> > Queue start/stop = Y
> > +Packet type parsing = Y
>
> "Packet type parsing" also requires to support
> 'rte_eth_dev_get_supported_ptypes()' & 'rte_eth_dev_set_ptypes()' APIs.
>
> The current implementation seems to parse the packet type and update the mbuf
> field, but doesn't support the above APIs; can you please add them too? There is
> already a 'ngbe_dev_supported_ptypes_get()' function, but the dev_ops seems not
> to be set.
>
Oops... I forgot it.
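For reference, the missing piece is a one-line hook in the ops table; a minimal
sketch, assuming the helper named in this review and the eth_dev_ops field of
that release (to be checked against the tree):

    /* ngbe_dev_supported_ptypes_get() already exists in the driver;
     * exposing it here is what makes rte_eth_dev_get_supported_ptypes()
     * work for the port */
    static const struct eth_dev_ops ngbe_eth_dev_ops = {
        /* ... existing ops ... */
        .dev_supported_ptypes_get = ngbe_dev_supported_ptypes_get,
    };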
> <...>
>
> > +++ b/drivers/net/ngbe/ngbe_ptypes.c
> > @@ -0,0 +1,300 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd.
> > + */
> > +
> > +#include <rte_mbuf.h>
> > +#include <rte_memory.h>
> > +
> > +#include "base/ngbe_type.h"
> > +#include "ngbe_ptypes.h"
> > +
> > +/* The ngbe_ptype_lookup is used to convert from the 8-bit ptid in the
> > + * hardware to a bit-field that can be used by SW to more easily determine
> > + * the packet type.
> > + *
> > + * Macros are used to shorten the table lines and make this table human
> > + * readable.
> > + *
> > + * We store the PTYPE in the top byte of the bit field - this is just so
> > + * that we can check that the table doesn't have a row missing, as the
> > + * index into the table should be the PTYPE.
> > + */
> > +#define TPTE(ptid, l2, l3, l4, tun, el2, el3, el4) \
> > + [ptid] = (RTE_PTYPE_L2_##l2 | \
> > + RTE_PTYPE_L3_##l3 | \
> > + RTE_PTYPE_L4_##l4 | \
> > + RTE_PTYPE_TUNNEL_##tun | \
> > + RTE_PTYPE_INNER_L2_##el2 | \
> > + RTE_PTYPE_INNER_L3_##el3 | \
> > + RTE_PTYPE_INNER_L4_##el4)
> > +
> > +#define RTE_PTYPE_L2_NONE 0
> > +#define RTE_PTYPE_L3_NONE 0
> > +#define RTE_PTYPE_L4_NONE 0
> > +#define RTE_PTYPE_TUNNEL_NONE 0
> > +#define RTE_PTYPE_INNER_L2_NONE 0
> > +#define RTE_PTYPE_INNER_L3_NONE 0
> > +#define RTE_PTYPE_INNER_L4_NONE 0
>
> Why are you defining new PTYPEs? If these are for driver-internal use you can drop
> the 'RTE_' prefix.
>
I just want to use short macros to make the lookup table readable.
So the 'RTE_' prefix is needed here, to stay consistent with the other RTE mbuf packet types.
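To illustrate the point: with the NONE aliases in place, every row of the
lookup table can fill all seven macro arguments uniformly, even when a layer is
absent. A hypothetical row (the 0x11 ptid value is made up for illustration):

    /* plain IPv4/TCP over Ethernet, no tunnel: each NONE expands to 0 */
    TPTE(0x11, ETHER, IPV4, TCP, NONE, NONE, NONE, NONE),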
> <...>
>
> > +
> > +#ifndef RTE_PTYPE_UNKNOWN
> > +#define RTE_PTYPE_UNKNOWN 0x00000000
> > +#define RTE_PTYPE_L2_ETHER 0x00000001
> > +#define RTE_PTYPE_L2_ETHER_TIMESYNC 0x00000002
> > +#define RTE_PTYPE_L2_ETHER_ARP 0x00000003
> > +#define RTE_PTYPE_L2_ETHER_LLDP 0x00000004
> > +#define RTE_PTYPE_L2_ETHER_NSH 0x00000005
> > +#define RTE_PTYPE_L2_ETHER_FCOE 0x00000009
> > +#define RTE_PTYPE_L3_IPV4 0x00000010
> > +#define RTE_PTYPE_L3_IPV4_EXT 0x00000030
> > +#define RTE_PTYPE_L3_IPV6 0x00000040
> > +#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090
> > +#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0
> > +#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0
> > +#define RTE_PTYPE_L4_TCP 0x00000100
> > +#define RTE_PTYPE_L4_UDP 0x00000200
> > +#define RTE_PTYPE_L4_FRAG 0x00000300
> > +#define RTE_PTYPE_L4_SCTP 0x00000400
> > +#define RTE_PTYPE_L4_ICMP 0x00000500
> > +#define RTE_PTYPE_L4_NONFRAG 0x00000600
> > +#define RTE_PTYPE_TUNNEL_IP 0x00001000
> > +#define RTE_PTYPE_TUNNEL_GRE 0x00002000
> > +#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000
> > +#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000
> > +#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000
> > +#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000
> > +#define RTE_PTYPE_INNER_L2_ETHER 0x00010000
> > +#define RTE_PTYPE_INNER_L2_ETHER_VLAN 0x00020000
> > +#define RTE_PTYPE_INNER_L3_IPV4 0x00100000
> > +#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000
> > +#define RTE_PTYPE_INNER_L3_IPV6 0x00300000
> > +#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
> > +#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000
> > +#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
> > +#define RTE_PTYPE_INNER_L4_TCP 0x01000000
> > +#define RTE_PTYPE_INNER_L4_UDP 0x02000000
> > +#define RTE_PTYPE_INNER_L4_FRAG 0x03000000
> > +#define RTE_PTYPE_INNER_L4_SCTP 0x04000000
> > +#define RTE_PTYPE_INNER_L4_ICMP 0x05000000
> > +#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000
> > +#endif /* !RTE_PTYPE_UNKNOWN */
>
> These are already defined in the mbuf public header; why are they defined
> again?
>
These can be removed directly. They were added previously for version compatibility.
> <...>
>
> > @@ -378,6 +389,10 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> > rxm->data_len = pkt_len;
> > rxm->port = rxq->port_id;
> >
> > + pkt_info = rte_le_to_cpu_32(rxd.qw0.dw0);
> > + rxm->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info,
> > + rxq->pkt_type_mask);
> > +
> > /*
> > * Store the mbuf address into the next entry of the array
> > * of returned packets.
> > @@ -799,6 +814,7 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
> > rxq->port_id = dev->data->port_id;
> > rxq->drop_en = rx_conf->rx_drop_en;
> > rxq->rx_deferred_start = rx_conf->rx_deferred_start;
> > + rxq->pkt_type_mask = NGBE_PTID_MASK;
>
> What is the use of 'pkt_type_mask'? It seems to be a fixed value, so why keep
> it per queue?
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 08/32] net/ngbe: support basic statistics
2021-09-15 16:50 ` Ferruh Yigit
@ 2021-10-14 2:51 ` Jiawen Wu
2021-10-14 7:59 ` Ferruh Yigit
0 siblings, 1 reply; 54+ messages in thread
From: Jiawen Wu @ 2021-10-14 2:51 UTC (permalink / raw)
To: 'Ferruh Yigit'; +Cc: dev
On September 16, 2021 12:51 AM, Ferruh Yigit wrote:
> On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> > Support to read and clear basic statistics, and configure per-queue
> > stats counter mapping.
> >
> > Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> > ---
> > doc/guides/nics/features/ngbe.ini | 2 +
> > doc/guides/nics/ngbe.rst | 1 +
> > drivers/net/ngbe/base/ngbe_dummy.h | 5 +
> > drivers/net/ngbe/base/ngbe_hw.c | 101 ++++++++++
> > drivers/net/ngbe/base/ngbe_hw.h | 1 +
> > drivers/net/ngbe/base/ngbe_type.h | 134 +++++++++++++
> > drivers/net/ngbe/ngbe_ethdev.c | 300 +++++++++++++++++++++++++
> > drivers/net/ngbe/ngbe_ethdev.h | 19 ++
> > 8 files changed, 563 insertions(+)
> >
>
> <...>
>
> > +static int
> > +ngbe_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> > +{
> > + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> > + struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev);
> > + struct ngbe_stat_mappings *stat_mappings =
> > + NGBE_DEV_STAT_MAPPINGS(dev);
> > + uint32_t i, j;
> > +
> > + ngbe_read_stats_registers(hw, hw_stats);
> > +
> > + if (stats == NULL)
> > + return -EINVAL;
> > +
> > + /* Fill out the rte_eth_stats statistics structure */
> > + stats->ipackets = hw_stats->rx_packets;
> > + stats->ibytes = hw_stats->rx_bytes;
> > + stats->opackets = hw_stats->tx_packets;
> > + stats->obytes = hw_stats->tx_bytes;
> > +
> > + memset(&stats->q_ipackets, 0, sizeof(stats->q_ipackets));
> > + memset(&stats->q_opackets, 0, sizeof(stats->q_opackets));
> > + memset(&stats->q_ibytes, 0, sizeof(stats->q_ibytes));
> > + memset(&stats->q_obytes, 0, sizeof(stats->q_obytes));
> > + memset(&stats->q_errors, 0, sizeof(stats->q_errors));
> > + for (i = 0; i < NGBE_MAX_QP; i++) {
> > + uint32_t n = i / NB_QMAP_FIELDS_PER_QSM_REG;
> > + uint32_t offset = (i % NB_QMAP_FIELDS_PER_QSM_REG) * 8;
> > + uint32_t q_map;
> > +
> > + q_map = (stat_mappings->rqsm[n] >> offset)
> > + & QMAP_FIELD_RESERVED_BITS_MASK;
> > + j = (q_map < RTE_ETHDEV_QUEUE_STAT_CNTRS
> > + ? q_map : q_map % RTE_ETHDEV_QUEUE_STAT_CNTRS);
> > + stats->q_ipackets[j] += hw_stats->qp[i].rx_qp_packets;
> > + stats->q_ibytes[j] += hw_stats->qp[i].rx_qp_bytes;
> > +
> > + q_map = (stat_mappings->tqsm[n] >> offset)
> > + & QMAP_FIELD_RESERVED_BITS_MASK;
> > + j = (q_map < RTE_ETHDEV_QUEUE_STAT_CNTRS
> > + ? q_map : q_map % RTE_ETHDEV_QUEUE_STAT_CNTRS);
> > + stats->q_opackets[j] += hw_stats->qp[i].tx_qp_packets;
> > + stats->q_obytes[j] += hw_stats->qp[i].tx_qp_bytes;
> > + }
> > +
> > + /* Rx Errors */
> > + stats->imissed = hw_stats->rx_total_missed_packets +
> > + hw_stats->rx_dma_drop;
> > + stats->ierrors = hw_stats->rx_crc_errors +
> > + hw_stats->rx_mac_short_packet_dropped +
> > + hw_stats->rx_length_errors +
> > + hw_stats->rx_undersize_errors +
> > + hw_stats->rx_oversize_errors +
> > + hw_stats->rx_illegal_byte_errors +
> > + hw_stats->rx_error_bytes +
> > + hw_stats->rx_fragment_errors;
> > +
> > + /* Tx Errors */
> > + stats->oerrors = 0;
> > + return 0;
>
> You can consider keeping the 'stats->rx_nombuf' statistic too; this needs to be
> calculated by the driver.
>
I see 'stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed' in rte_eth_stats_get(), before the stats_get ops is called.
Should I set it again here?
> <...>
>
> > +
> > static int
> > ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> > {
> > @@ -1462,6 +1759,9 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
> > .dev_close = ngbe_dev_close,
> > .dev_reset = ngbe_dev_reset,
> > .link_update = ngbe_dev_link_update,
> > + .stats_get = ngbe_dev_stats_get,
> > + .stats_reset = ngbe_dev_stats_reset,
> > + .queue_stats_mapping_set = ngbe_dev_queue_stats_mapping_set,
>
> 'queue_stats_mapping_set' is only needed when the number of stats registers is
> less than the number of queues. If this is not the case for you, please drop this
> support.
>
> Also, we are switching to exposing per-queue stats via xstats; please see
> 'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS'. This is mainly done to remove the
> compile-time 'RTE_ETHDEV_QUEUE_STAT_CNTRS' limitation.
>
> Btw, 'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS' seems to be missing; you should
> set it in the driver.
>
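For reference, opting in to the xstats-based per-queue stats is a single flag
set in the init path; a minimal sketch, assuming the 21.11-era ethdev flag
name:

    /* in eth_ngbe_dev_init(): let ethdev expose per-queue counters
     * through xstats instead of the fixed-size stats arrays */
    eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;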
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 08/32] net/ngbe: support basic statistics
2021-10-14 2:51 ` Jiawen Wu
@ 2021-10-14 7:59 ` Ferruh Yigit
0 siblings, 0 replies; 54+ messages in thread
From: Ferruh Yigit @ 2021-10-14 7:59 UTC (permalink / raw)
To: Jiawen Wu; +Cc: dev
On 10/14/2021 3:51 AM, Jiawen Wu wrote:
> On September 16, 2021 12:51 AM, Ferruh Yigit wrote:
>> On 9/8/2021 9:37 AM, Jiawen Wu wrote:
>>> Support to read and clear basic statistics, and configure per-queue
>>> stats counter mapping.
>>>
>>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
>>> ---
>>> doc/guides/nics/features/ngbe.ini | 2 +
>>> doc/guides/nics/ngbe.rst | 1 +
>>> drivers/net/ngbe/base/ngbe_dummy.h | 5 +
>>> drivers/net/ngbe/base/ngbe_hw.c | 101 ++++++++++
>>> drivers/net/ngbe/base/ngbe_hw.h | 1 +
>>> drivers/net/ngbe/base/ngbe_type.h | 134 +++++++++++++
>>> drivers/net/ngbe/ngbe_ethdev.c | 300 +++++++++++++++++++++++++
>>> drivers/net/ngbe/ngbe_ethdev.h | 19 ++
>>> 8 files changed, 563 insertions(+)
>>>
>>
>> <...>
>>
>>> +static int
>>> +ngbe_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
>>> +{
>>> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
>>> + struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev);
>>> + struct ngbe_stat_mappings *stat_mappings =
>>> + NGBE_DEV_STAT_MAPPINGS(dev);
>>> + uint32_t i, j;
>>> +
>>> + ngbe_read_stats_registers(hw, hw_stats);
>>> +
>>> + if (stats == NULL)
>>> + return -EINVAL;
>>> +
>>> + /* Fill out the rte_eth_stats statistics structure */
>>> + stats->ipackets = hw_stats->rx_packets;
>>> + stats->ibytes = hw_stats->rx_bytes;
>>> + stats->opackets = hw_stats->tx_packets;
>>> + stats->obytes = hw_stats->tx_bytes;
>>> +
>>> + memset(&stats->q_ipackets, 0, sizeof(stats->q_ipackets));
>>> + memset(&stats->q_opackets, 0, sizeof(stats->q_opackets));
>>> + memset(&stats->q_ibytes, 0, sizeof(stats->q_ibytes));
>>> + memset(&stats->q_obytes, 0, sizeof(stats->q_obytes));
>>> + memset(&stats->q_errors, 0, sizeof(stats->q_errors));
>>> + for (i = 0; i < NGBE_MAX_QP; i++) {
>>> + uint32_t n = i / NB_QMAP_FIELDS_PER_QSM_REG;
>>> + uint32_t offset = (i % NB_QMAP_FIELDS_PER_QSM_REG) * 8;
>>> + uint32_t q_map;
>>> +
>>> + q_map = (stat_mappings->rqsm[n] >> offset)
>>> + & QMAP_FIELD_RESERVED_BITS_MASK;
>>> + j = (q_map < RTE_ETHDEV_QUEUE_STAT_CNTRS
>>> + ? q_map : q_map % RTE_ETHDEV_QUEUE_STAT_CNTRS);
>>> + stats->q_ipackets[j] += hw_stats->qp[i].rx_qp_packets;
>>> + stats->q_ibytes[j] += hw_stats->qp[i].rx_qp_bytes;
>>> +
>>> + q_map = (stat_mappings->tqsm[n] >> offset)
>>> + & QMAP_FIELD_RESERVED_BITS_MASK;
>>> + j = (q_map < RTE_ETHDEV_QUEUE_STAT_CNTRS
>>> + ? q_map : q_map % RTE_ETHDEV_QUEUE_STAT_CNTRS);
>>> + stats->q_opackets[j] += hw_stats->qp[i].tx_qp_packets;
>>> + stats->q_obytes[j] += hw_stats->qp[i].tx_qp_bytes;
>>> + }
>>> +
>>> + /* Rx Errors */
>>> + stats->imissed = hw_stats->rx_total_missed_packets +
>>> + hw_stats->rx_dma_drop;
>>> + stats->ierrors = hw_stats->rx_crc_errors +
>>> + hw_stats->rx_mac_short_packet_dropped +
>>> + hw_stats->rx_length_errors +
>>> + hw_stats->rx_undersize_errors +
>>> + hw_stats->rx_oversize_errors +
>>> + hw_stats->rx_illegal_byte_errors +
>>> + hw_stats->rx_error_bytes +
>>> + hw_stats->rx_fragment_errors;
>>> +
>>> + /* Tx Errors */
>>> + stats->oerrors = 0;
>>> + return 0;
>>
>> You can consider keeping the 'stats->rx_nombuf' statistic too; this needs to be
>> calculated by the driver.
>>
>
> I see 'stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed' in rte_eth_stats_get(), before the stats_get ops is called.
> Should I set it again here?
>
You are right, I missed it. Just updating 'rx_mbuf_alloc_failed' is
sufficient.
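For context, the ethdev layer fills that field itself before calling into the
PMD, so the driver only needs to keep dev->data->rx_mbuf_alloc_failed current
in its Rx path. A paraphrased sketch of the generic code (simplified; see
rte_ethdev.c of the release for the exact version):

    #include <string.h>
    #include <rte_ethdev.h>

    int
    rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
    {
        struct rte_eth_dev *dev = &rte_eth_devices[port_id];

        memset(stats, 0, sizeof(*stats));
        /* filled by ethdev from the PMD-maintained counter */
        stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
        return (*dev->dev_ops->stats_get)(dev, stats);
    }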
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [dpdk-dev] [PATCH 14/32] net/ngbe: support Rx interrupt
2021-09-15 16:53 ` Ferruh Yigit
@ 2021-10-14 10:11 ` Jiawen Wu
0 siblings, 0 replies; 54+ messages in thread
From: Jiawen Wu @ 2021-10-14 10:11 UTC (permalink / raw)
To: 'Ferruh Yigit', dev
On September 16, 2021 12:54 AM, Ferruh Yigit wrote:
> On 9/8/2021 9:37 AM, Jiawen Wu wrote:
> > Support Rx queue interrupt.
> >
> > Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> > ---
> > doc/guides/nics/features/ngbe.ini | 1 +
> > doc/guides/nics/ngbe.rst | 1 +
> > drivers/net/ngbe/ngbe_ethdev.c | 35 +++++++++++++++++++++++
> > 3 files changed, 37 insertions(+)
> >
> > diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> > index 1006c3935b..d14469eb43 100644
> > --- a/doc/guides/nics/features/ngbe.ini
> > +++ b/doc/guides/nics/features/ngbe.ini
> > @@ -7,6 +7,7 @@
> > Speed capabilities = Y
> > Link status = Y
> > Link status event = Y
> > +Rx interrupt = Y
>
> This also requires configuring Rx interrupts when the user's
> 'dev_conf.intr_conf.rxq' config requests it.
>
> Can an application request and use Rx interrupts with the current state of the
> driver? Did you test it?
I can't find a corresponding test case in the examples; could you give me a suggestion?
I configured almost the same registers as the kernel driver does.
For now I'll drop this feature, and restore it once I have a successful test result.
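For what it's worth, the usual in-tree exerciser for this feature is the
examples/l3fwd-power sample, which parks a polling lcore on the queue's
interrupt when traffic goes idle. A minimal sketch of the application-side
calls (assuming intr_conf.rxq was set at configure time; error handling
omitted):

    #include <rte_ethdev.h>
    #include <rte_interrupts.h>

    static void
    wait_for_rx(uint16_t port_id, uint16_t queue_id, int timeout_ms)
    {
        struct rte_epoll_event event;

        /* one-time in real code: hook the queue's event fd into epoll */
        rte_eth_dev_rx_intr_ctl_q(port_id, queue_id,
                RTE_EPOLL_PER_THREAD, RTE_INTR_EVENT_ADD, NULL);

        /* arm the interrupt, sleep until traffic, then resume polling */
        rte_eth_dev_rx_intr_enable(port_id, queue_id);
        if (rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, timeout_ms) > 0)
            rte_eth_dev_rx_intr_disable(port_id, queue_id);
    }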
^ permalink raw reply [flat|nested] 54+ messages in thread
end of thread, other threads:[~2021-10-14 10:11 UTC | newest]
Thread overview: 54+ messages
-- links below jump to the message on this page --
2021-09-08 8:37 [dpdk-dev] [PATCH 00/32] net/ngbe: add many features Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 01/32] net/ngbe: add packet type Jiawen Wu
2021-09-15 16:47 ` Ferruh Yigit
2021-09-22 8:01 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 02/32] net/ngbe: support scattered Rx Jiawen Wu
2021-09-15 13:22 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 03/32] net/ngbe: support Rx checksum offload Jiawen Wu
2021-09-15 16:48 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 04/32] net/ngbe: support TSO Jiawen Wu
2021-09-15 16:57 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 05/32] net/ngbe: support CRC offload Jiawen Wu
2021-09-15 16:48 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 06/32] net/ngbe: support jumbo frame Jiawen Wu
2021-09-15 16:48 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 07/32] net/ngbe: support VLAN and QinQ offload Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 08/32] net/ngbe: support basic statistics Jiawen Wu
2021-09-15 16:50 ` Ferruh Yigit
2021-10-14 2:51 ` Jiawen Wu
2021-10-14 7:59 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 09/32] net/ngbe: support device xstats Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 10/32] net/ngbe: support MTU set Jiawen Wu
2021-09-15 16:52 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 11/32] net/ngbe: add device promiscuous and allmulticast mode Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 12/32] net/ngbe: support getting FW version Jiawen Wu
2021-09-15 16:53 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 13/32] net/ngbe: add loopback mode Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 14/32] net/ngbe: support Rx interrupt Jiawen Wu
2021-09-15 16:53 ` Ferruh Yigit
2021-10-14 10:11 ` Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 15/32] net/ngbe: support MAC filters Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 16/32] net/ngbe: support VLAN filter Jiawen Wu
2021-09-15 16:54 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 17/32] net/ngbe: support RSS hash Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 18/32] net/ngbe: support SRIOV Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 19/32] net/ngbe: add mailbox process operations Jiawen Wu
2021-09-15 16:56 ` Ferruh Yigit
2021-09-08 8:37 ` [dpdk-dev] [PATCH 20/32] net/ngbe: support flow control Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 21/32] net/ngbe: support device LED on and off Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 22/32] net/ngbe: support EEPROM dump Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 23/32] net/ngbe: support register dump Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 24/32] net/ngbe: support timesync Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 25/32] net/ngbe: add Rx and Tx queue info get Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 26/32] net/ngbe: add Rx and Tx descriptor status Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 27/32] net/ngbe: add Tx done cleanup Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 28/32] net/ngbe: add IPsec context creation Jiawen Wu
2021-09-15 16:58 ` Ferruh Yigit
2021-09-16 9:00 ` Hemant Agrawal
2021-09-16 17:15 ` Ferruh Yigit
2021-09-16 9:04 ` Hemant Agrawal
2021-09-08 8:37 ` [dpdk-dev] [PATCH 29/32] net/ngbe: create and destroy security session Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 30/32] net/ngbe: support security operations Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 31/32] net/ngbe: add security offload in Rx and Tx Jiawen Wu
2021-09-08 8:37 ` [dpdk-dev] [PATCH 32/32] doc: update for ngbe Jiawen Wu
2021-09-15 16:58 ` Ferruh Yigit