From: Apeksha Gupta
To: david.marchand@redhat.com, andrew.rybchenko@oktetlabs.ru, ferruh.yigit@intel.com
Cc: dev@dpdk.org, sachin.saxena@nxp.com, hemant.agrawal@nxp.com, Apeksha Gupta
Date: Wed, 20 Oct 2021 00:10:03 +0530
Message-Id: <20211019184003.23128-6-apeksha.gupta@nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20211019184003.23128-1-apeksha.gupta@nxp.com>
References: <20211001114230.14107-2-apeksha.gupta@nxp.com>
 <20211019184003.23128-1-apeksha.gupta@nxp.com>
Subject: [dpdk-dev] [PATCH v5 5/5] net/enetfec: add features

This patch adds checksum and VLAN offload support to the enetfec network
poll mode driver. The RX checksum and VLAN offloads are enabled only when
the controller advertises the corresponding QUIRK_CSUM and QUIRK_VLAN
capabilities.
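For illustration only (not part of the driver change): a minimal sketch of
how an application might request the new RX offloads when configuring an
enetfec port. The function name, port id, queue sizes and mempool below are
assumptions made for the example, and the requested bits are masked against
the capabilities the port reports, so the sketch also applies to controllers
without these features.

#include <rte_ethdev.h>
#include <rte_mempool.h>

static int
configure_enetfec_port(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf = { 0 };
	/* Offload bits this series makes the PMD advertise (pre-21.11 names). */
	uint64_t wanted = DEV_RX_OFFLOAD_VLAN_STRIP | DEV_RX_OFFLOAD_CHECKSUM;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* Request only the offload bits the port reports as supported. */
	port_conf.rxmode.offloads = wanted & dev_info.rx_offload_capa;

	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	if (ret != 0)
		return ret;

	ret = rte_eth_rx_queue_setup(port_id, 0, 512,
			rte_eth_dev_socket_id(port_id), NULL, mb_pool);
	if (ret != 0)
		return ret;

	ret = rte_eth_tx_queue_setup(port_id, 0, 512,
			rte_eth_dev_socket_id(port_id), NULL);
	if (ret != 0)
		return ret;

	return rte_eth_dev_start(port_id);
}

Whether VLAN stripping and checksum validation actually take effect then
depends on the quirks the probed controller advertises, as handled in the
enetfec_eth_init() change below.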
Signed-off-by: Sachin Saxena
Signed-off-by: Apeksha Gupta
---
 doc/guides/nics/enetfec.rst          |  2 ++
 doc/guides/nics/features/enetfec.ini |  3 ++
 drivers/net/enetfec/enet_ethdev.c    | 17 ++++++++-
 drivers/net/enetfec/enet_regs.h      | 10 ++++++
 drivers/net/enetfec/enet_rxtx.c      | 53 +++++++++++++++++++++++++++-
 5 files changed, 83 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index 6c4e23379f..7f4560e5ce 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -82,6 +82,8 @@ ENETFEC Features
 
 - Basic stats
 - Promiscuous
+- VLAN offload
+- L3/L4 checksum offload
 - Linux
 - ARMv8
 
diff --git a/doc/guides/nics/features/enetfec.ini b/doc/guides/nics/features/enetfec.ini
index 7e0fb148ac..3e9cc90b9f 100644
--- a/doc/guides/nics/features/enetfec.ini
+++ b/doc/guides/nics/features/enetfec.ini
@@ -6,6 +6,9 @@
 [Features]
 Basic stats          = Y
 Promiscuous mode     = Y
+VLAN offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
 Linux                = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 4419952443..c6957e16e5 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -106,7 +106,11 @@ enetfec_restart(struct rte_eth_dev *dev)
 		val = rte_read32((uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
 		/* align IP header */
 		val |= ENETFEC_RACC_SHIFT16;
-		val &= ~ENETFEC_RACC_OPTIONS;
+		if (fep->flag_csum & RX_FLAG_CSUM_EN)
+			/* set RX checksum */
+			val |= ENETFEC_RACC_OPTIONS;
+		else
+			val &= ~ENETFEC_RACC_OPTIONS;
 		rte_write32(rte_cpu_to_le_32(val),
 			(uint8_t *)fep->hw_baseaddr_v + ENETFEC_RACC);
 		rte_write32(rte_cpu_to_le_32(PKT_MAX_BUF_SIZE),
@@ -611,9 +615,20 @@ static int
 enetfec_eth_init(struct rte_eth_dev *dev)
 {
 	struct enetfec_private *fep = dev->data->dev_private;
+	struct rte_eth_conf *eth_conf = &fep->dev->data->dev_conf;
+	uint64_t rx_offloads = eth_conf->rxmode.offloads;
 
 	fep->full_duplex = FULL_DUPLEX;
 	dev->dev_ops = &enetfec_ops;
 
+	if (fep->quirks & QUIRK_VLAN)
+		/* enable hw VLAN support */
+		rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+
+	if (fep->quirks & QUIRK_CSUM) {
+		/* enable hw accelerator */
+		rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		fep->flag_csum |= RX_FLAG_CSUM_EN;
+	}
 	rte_eth_dev_probing_finish(dev);
 	return 0;
diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h
index 5415ed77ea..a300c6f8bc 100644
--- a/drivers/net/enetfec/enet_regs.h
+++ b/drivers/net/enetfec/enet_regs.h
@@ -27,6 +27,12 @@
 #define RX_BD_EMPTY ((ushort)0x8000) /* BD is empty */
 #define RX_BD_STATS ((ushort)0x013f) /* All buffer descriptor status bits */
 
+/* Ethernet receive use control and status of enhanced buffer descriptor */
+#define BD_ENETFEC_RX_VLAN	0x00000004
+
+#define RX_FLAG_CSUM_EN		(RX_BD_ICE | RX_BD_PCR)
+#define RX_FLAG_CSUM_ERR	(RX_BD_ICE | RX_BD_PCR)
+
 /* Ethernet transmit use control and status of buffer descriptor */
 #define TX_BD_TC	((ushort)0x0400) /* Transmit CRC */
 #define TX_BD_LAST	((ushort)0x0800) /* Last in frame */
@@ -56,6 +62,10 @@
 #define QUIRK_HAS_ENETFEC_MAC	(1 << 0)
 /* GBIT supported in controller */
 #define QUIRK_GBIT		(1 << 3)
+/* Controller support hardware checksum */
+#define QUIRK_CSUM		(1 << 5)
+/* Controller support hardware vlan */
+#define QUIRK_VLAN		(1 << 6)
 /* RACC register supported by controller */
 #define QUIRK_RACC		(1 << 12)
 /* i.MX8 ENETFEC IP version added the feature to generate the delayed TXC or
diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
index 445fa97e77..fdd3343589 100644
--- a/drivers/net/enetfec/enet_rxtx.c
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -245,9 +245,14 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
 	unsigned short status;
 	unsigned short pkt_len;
 	int pkt_received = 0, index = 0;
-	void *data;
+	void *data, *mbuf_data;
+	uint16_t vlan_tag;
+	struct bufdesc_ex *ebdp = NULL;
+	bool vlan_packet_rcvd = false;
 	struct enetfec_priv_rx_q *rxq = (struct enetfec_priv_rx_q *)rxq1;
 	struct rte_eth_stats *stats = &rxq->fep->stats;
+	struct rte_eth_conf *eth_conf = &rxq->fep->dev->data->dev_conf;
+	uint64_t rx_offloads = eth_conf->rxmode.offloads;
 	pool = rxq->pool;
 	bdp = rxq->bd.cur;
 #if ENETFEC_LOOPBACK
@@ -302,6 +307,7 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
 
 		mbuf = rxq->rx_mbuf[index];
 		data = rte_pktmbuf_mtod(mbuf, uint8_t *);
+		mbuf_data = data;
 		rte_prefetch0(data);
 		rte_pktmbuf_append((struct rte_mbuf *)mbuf,
 				pkt_len - 4);
@@ -311,6 +317,47 @@ enetfec_recv_pkts(void *rxq1, __rte_unused struct rte_mbuf **rx_pkts,
 
 		rx_pkts[pkt_received] = mbuf;
 		pkt_received++;
+
+		/* Extract the enhanced buffer descriptor */
+		ebdp = NULL;
+		if (rxq->fep->bufdesc_ex)
+			ebdp = (struct bufdesc_ex *)bdp;
+
+		/* If this is a VLAN packet remove the VLAN Tag */
+		vlan_packet_rcvd = false;
+		if ((rx_offloads & DEV_RX_OFFLOAD_VLAN) &&
+				rxq->fep->bufdesc_ex &&
+				(rte_read32(&ebdp->bd_esc) &
+				rte_cpu_to_le_32(BD_ENETFEC_RX_VLAN))) {
+			/* Push and remove the vlan tag */
+			struct rte_vlan_hdr *vlan_header =
+				(struct rte_vlan_hdr *)
+				((uint8_t *)data + ETH_HLEN);
+			vlan_tag = rte_be_to_cpu_16(vlan_header->vlan_tci);
+
+			vlan_packet_rcvd = true;
+			memmove((uint8_t *)mbuf_data + VLAN_HLEN,
+				data, ETH_ALEN * 2);
+			rte_pktmbuf_adj(mbuf, VLAN_HLEN);
+		}
+
+		if (rxq->fep->bufdesc_ex &&
+			(rxq->fep->flag_csum & RX_FLAG_CSUM_EN)) {
+			if ((rte_read32(&ebdp->bd_esc) &
+				rte_cpu_to_le_32(RX_FLAG_CSUM_ERR)) == 0) {
+				/* don't check it */
+				mbuf->ol_flags = PKT_RX_IP_CKSUM_BAD;
+			} else {
+				mbuf->ol_flags = PKT_RX_IP_CKSUM_GOOD;
+			}
+		}
+
+		/* Handle received VLAN packets */
+		if (vlan_packet_rcvd) {
+			mbuf->vlan_tci = vlan_tag;
+			mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		}
+
 		rxq->rx_mbuf[index] = new_mbuf;
 		rte_write32(rte_cpu_to_le_32(rte_pktmbuf_iova(new_mbuf)),
 				&bdp->bd_bufaddr);
@@ -411,6 +458,10 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		if (txq->fep->bufdesc_ex) {
 			struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
+
+			if (mbuf->ol_flags == PKT_RX_IP_CKSUM_GOOD)
+				estatus |= TX_BD_PINS | TX_BD_IINS;
+
 			rte_write32(0, &ebdp->bd_bdu);
 			rte_write32(rte_cpu_to_le_32(estatus),
 				&ebdp->bd_esc);
-- 
2.17.1
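One more hedged usage sketch, outside the patch proper: how the per-packet
results of these offloads (the stripped tag left in vlan_tci and the
checksum bits in ol_flags) might be consumed after rte_eth_rx_burst(). The
port/queue ids, burst size and function name are assumptions for the
example; the flag names are the pre-21.11 ones used throughout this series.

#include <stdio.h>

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
poll_one_burst(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb, i;

	nb = rte_eth_rx_burst(port_id, queue_id, pkts, RTE_DIM(pkts));
	for (i = 0; i < nb; i++) {
		struct rte_mbuf *m = pkts[i];

		/* VLAN tag stripped by the PMD and stored in vlan_tci. */
		if (m->ol_flags & PKT_RX_VLAN_STRIPPED)
			printf("port %u: VLAN tci 0x%04x\n",
					port_id, m->vlan_tci);

		/* IP header checksum status reported by the hardware. */
		if ((m->ol_flags & PKT_RX_IP_CKSUM_MASK) ==
				PKT_RX_IP_CKSUM_BAD) {
			/* drop packets flagged with a bad checksum */
			rte_pktmbuf_free(m);
			continue;
		}

		/* ... normal packet processing would go here ... */
		rte_pktmbuf_free(m);
	}
}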