From: Jiawen Wu <jiawenwu@trustnetic.com>
To: dev@dpdk.org
Cc: Jiawen Wu <jiawenwu@trustnetic.com>
Date: Thu, 8 Jul 2021 17:32:37 +0800
Message-Id: <20210708093239.13896-18-jiawenwu@trustnetic.com>
X-Mailer: git-send-email 2.21.0.windows.1
In-Reply-To: <20210708093239.13896-1-jiawenwu@trustnetic.com>
References: <20210708093239.13896-1-jiawenwu@trustnetic.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v8 17/19] net/ngbe: add simple Rx flow

Initialize the device with the simplest receive function.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/ngbe_ethdev.c |   1 +
 drivers/net/ngbe/ngbe_ethdev.h |   3 +
 drivers/net/ngbe/ngbe_rxtx.c   | 169 +++++++++++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_rxtx.h   |  73 ++++++++++++++
 4 files changed, 246 insertions(+)
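The burst function added below is reached through the normal ethdev
polling path once it is registered in eth_dev->rx_pkt_burst. For
reference, a minimal sketch of a caller, assuming a port that has
already been configured and started with one Rx queue; poll_rx_queue()
and BURST_SIZE are illustrative names, not part of this patch or of
the DPDK API:

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	#define BURST_SIZE 32	/* illustrative burst size */

	/* Poll one Rx queue and drop everything received.
	 * rte_eth_rx_burst() dispatches to ngbe_recv_pkts() through
	 * the rx_pkt_burst pointer set in eth_ngbe_dev_init(). */
	static void
	poll_rx_queue(uint16_t port_id, uint16_t queue_id)
	{
		struct rte_mbuf *pkts[BURST_SIZE];
		uint16_t i, nb_rx;

		nb_rx = rte_eth_rx_burst(port_id, queue_id,
					 pkts, BURST_SIZE);
		for (i = 0; i < nb_rx; i++)
			rte_pktmbuf_free(pkts[i]);
	}

The tail write-back at the end of ngbe_recv_pkts() deliberately lags
one slot behind the next descriptor to be parsed, so the RDT register
never equals RDH (which the hardware would read as a full ring). The
wrap-around computation in isolation, with an illustrative helper name:

	/* Given the index of the next descriptor to parse, return the
	 * value to write to RDT: one slot behind, wrapping at the
	 * ring size. */
	static inline uint16_t
	rx_tail_to_write(uint16_t next_to_parse, uint16_t nb_rx_desc)
	{
		return (next_to_parse == 0) ? (nb_rx_desc - 1)
					    : (next_to_parse - 1);
	}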
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 2a40dbd184..e5e2f44454 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -137,6 +137,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	PMD_INIT_FUNC_TRACE();
 
 	eth_dev->dev_ops = &ngbe_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &ngbe_recv_pkts;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 049d8fe71d..3f17384596 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -99,6 +99,9 @@ int ngbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 
 int ngbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 
+uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts);
+
 void ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction,
 		       uint8_t queue, uint8_t msix_vector);
 
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 47b5926e34..1207563cfe 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -15,6 +15,175 @@
 #include "ngbe_ethdev.h"
 #include "ngbe_rxtx.h"
 
+/*
+ * Prefetch a cache line into all cache levels.
+ */
+#define rte_ngbe_prefetch(p)	rte_prefetch0(p)
+
+/*********************************************************************
+ *
+ *  Rx functions
+ *
+ **********************************************************************/
+uint16_t
+ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts)
+{
+	struct ngbe_rx_queue *rxq;
+	volatile struct ngbe_rx_desc *rx_ring;
+	volatile struct ngbe_rx_desc *rxdp;
+	struct ngbe_rx_entry *sw_ring;
+	struct ngbe_rx_entry *rxe;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	struct ngbe_rx_desc rxd;
+	uint64_t dma_addr;
+	uint32_t staterr;
+	uint16_t pkt_len;
+	uint16_t rx_id;
+	uint16_t nb_rx;
+	uint16_t nb_hold;
+
+	nb_rx = 0;
+	nb_hold = 0;
+	rxq = rx_queue;
+	rx_id = rxq->rx_tail;
+	rx_ring = rxq->rx_ring;
+	sw_ring = rxq->sw_ring;
+	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
+	while (nb_rx < nb_pkts) {
+		/*
+		 * The order of operations here is important as the DD status
+		 * bit must not be read after any other descriptor fields.
+		 * rx_ring and rxdp are pointing to volatile data so the order
+		 * of accesses cannot be reordered by the compiler. If they were
+		 * not volatile, they could be reordered which could lead to
+		 * using invalid descriptor fields when read from rxd.
+		 */
+		rxdp = &rx_ring[rx_id];
+		staterr = rxdp->qw1.lo.status;
+		if (!(staterr & rte_cpu_to_le_32(NGBE_RXD_STAT_DD)))
+			break;
+		rxd = *rxdp;
+
+		/*
+		 * End of packet.
+		 *
+		 * If the NGBE_RXD_STAT_EOP flag is not set, the Rx packet
+		 * is likely to be invalid and to be dropped by the various
+		 * validation checks performed by the network stack.
+		 *
+		 * Allocate a new mbuf to replenish the Rx ring descriptor.
+		 * If the allocation fails:
+		 *    - arrange for that Rx descriptor to be the first one
+		 *      being parsed the next time the receive function is
+		 *      invoked [on the same queue].
+		 *
+		 *    - Stop parsing the Rx ring and return immediately.
+		 *
+		 * This policy does not drop the packet received in the Rx
+		 * descriptor for which the allocation of a new mbuf failed.
+		 * Thus, it allows that packet to be later retrieved if
+		 * mbufs have been freed in the meantime.
+		 * As a side effect, holding Rx descriptors instead of
+		 * systematically giving them back to the NIC may lead to
+		 * Rx ring exhaustion situations.
+		 * However, the NIC can gracefully prevent such situations
+		 * from happening by sending specific "back-pressure" flow
+		 * control frames to its peer(s).
+		 */
+		PMD_RX_LOG(DEBUG,
+			"port_id=%u queue_id=%u rx_id=%u ext_err_stat=0x%08x pkt_len=%u",
+			(uint16_t)rxq->port_id, (uint16_t)rxq->queue_id,
+			(uint16_t)rx_id, (uint32_t)staterr,
+			(uint16_t)rte_le_to_cpu_16(rxd.qw1.hi.len));
+
+		nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
+		if (nmb == NULL) {
+			PMD_RX_LOG(DEBUG,
+				"Rx mbuf alloc failed port_id=%u queue_id=%u",
+				(uint16_t)rxq->port_id,
+				(uint16_t)rxq->queue_id);
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id];
+		rx_id++;
+		if (rx_id == rxq->nb_rx_desc)
+			rx_id = 0;
+
+		/* Prefetch next mbuf while processing current one. */
+		rte_ngbe_prefetch(sw_ring[rx_id].mbuf);
+
+		/*
+		 * When the next Rx descriptor is on a cache-line boundary,
+		 * prefetch the next 4 Rx descriptors and the next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_ngbe_prefetch(&rx_ring[rx_id]);
+			rte_ngbe_prefetch(&sw_ring[rx_id]);
+		}
+
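+		/*
+		 * Detach the mbuf that carries the received packet,
+		 * replenish the software ring with the freshly allocated
+		 * one, then re-arm this descriptor with the new buffer's
+		 * DMA address.
+		 */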
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		NGBE_RXD_HDRADDR(rxdp, 0);
+		NGBE_RXD_PKTADDR(rxdp, dma_addr);
+
+		/*
+		 * Initialize the returned mbuf.
+		 * Set up the generic mbuf fields:
+		 *    - number of segments,
+		 *    - next segment,
+		 *    - packet length,
+		 *    - Rx port identifier.
+		 */
+		pkt_len = (uint16_t)(rte_le_to_cpu_16(rxd.qw1.hi.len));
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_packet_prefetch((char *)rxm->buf_addr + rxm->data_off);
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = pkt_len;
+		rxm->data_len = pkt_len;
+		rxm->port = rxq->port_id;
+
+		/*
+		 * Store the mbuf address into the next entry of the array
+		 * of returned packets.
+		 */
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+
+	/*
+	 * If the number of free Rx descriptors is greater than the Rx free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register.
+	 * Update the RDT with the value of the last processed Rx descriptor
+	 * minus 1, to guarantee that the RDT register is never equal to the
+	 * RDH register, which creates a "full" ring situation from the
+	 * hardware point of view.
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG,
+			"port_id=%u queue_id=%u rx_tail=%u nb_hold=%u nb_rx=%u",
+			(uint16_t)rxq->port_id, (uint16_t)rxq->queue_id,
+			(uint16_t)rx_id, (uint16_t)nb_hold,
+			(uint16_t)nb_rx);
+		rx_id = (uint16_t)((rx_id == 0) ?
+				(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		ngbe_set32(rxq->rdt_reg_addr, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+	return nb_rx;
+}
+
+
 /*********************************************************************
  *
  *  Queue management functions
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index e35ef166d3..7bef7b918b 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -51,6 +51,77 @@ struct ngbe_rx_desc {
 #define NGBE_RXD_HDRADDR(rxd, v) \
 	(((volatile __le64 *)(rxd))[1] = cpu_to_le64(v))
 
+/* @ngbe_rx_desc.dw0 */
+#define NGBE_RXD_RSSTYPE(dw)		RS(dw, 0, 0xF)
+#define   NGBE_RSSTYPE_NONE		0
+#define   NGBE_RSSTYPE_IPV4TCP		1
+#define   NGBE_RSSTYPE_IPV4		2
+#define   NGBE_RSSTYPE_IPV6TCP		3
+#define   NGBE_RSSTYPE_IPV4SCTP		4
+#define   NGBE_RSSTYPE_IPV6		5
+#define   NGBE_RSSTYPE_IPV6SCTP		6
+#define   NGBE_RSSTYPE_IPV4UDP		7
+#define   NGBE_RSSTYPE_IPV6UDP		8
+#define   NGBE_RSSTYPE_FDIR		15
+#define NGBE_RXD_SECTYPE(dw)		RS(dw, 4, 0x3)
+#define NGBE_RXD_SECTYPE_NONE		LS(0, 4, 0x3)
+#define NGBE_RXD_SECTYPE_IPSECESP	LS(2, 4, 0x3)
+#define NGBE_RXD_SECTYPE_IPSECAH	LS(3, 4, 0x3)
+#define NGBE_RXD_TPIDSEL(dw)		RS(dw, 6, 0x7)
+#define NGBE_RXD_PTID(dw)		RS(dw, 9, 0xFF)
+#define NGBE_RXD_RSCCNT(dw)		RS(dw, 17, 0xF)
+#define NGBE_RXD_HDRLEN(dw)		RS(dw, 21, 0x3FF)
+#define NGBE_RXD_SPH			MS(31, 0x1)
+
+/* @ngbe_rx_desc.dw1 */
+/** bit 0-31, as rss hash when  **/
+#define NGBE_RXD_RSSHASH(rxd)		((rxd)->qw0.dw1)
+
+/** bit 0-31, as ip csum when  **/
+#define NGBE_RXD_IPID(rxd)		((rxd)->qw0.hi.ipid)
+#define NGBE_RXD_CSUM(rxd)		((rxd)->qw0.hi.csum)
+
+/* @ngbe_rx_desc.dw2 */
+#define NGBE_RXD_STATUS(rxd)		((rxd)->qw1.lo.status)
+/** bit 0-1 **/
+#define NGBE_RXD_STAT_DD		MS(0, 0x1) /* Descriptor Done */
+#define NGBE_RXD_STAT_EOP		MS(1, 0x1) /* End of Packet */
+/** bit 2-31, when EOP=0 **/
+#define NGBE_RXD_NEXTP_RESV(v)		LS(v, 2, 0x3)
+#define NGBE_RXD_NEXTP(dw)		RS(dw, 4, 0xFFFF) /* Next Descriptor */
+/** bit 2-31, when EOP=1 **/
+#define NGBE_RXD_PKT_CLS_MASK		MS(2, 0x7) /* Packet Class */
+#define NGBE_RXD_PKT_CLS_TC_RSS		LS(0, 2, 0x7) /* RSS Hash */
+#define NGBE_RXD_PKT_CLS_FLM		LS(1, 2, 0x7) /* FDir Match */
+#define NGBE_RXD_PKT_CLS_SYN		LS(2, 2, 0x7) /* TCP Sync */
+#define NGBE_RXD_PKT_CLS_5TUPLE		LS(3, 2, 0x7) /* 5 Tuple */
+#define NGBE_RXD_PKT_CLS_ETF		LS(4, 2, 0x7) /* Ethertype Filter */
+#define NGBE_RXD_STAT_VLAN		MS(5, 0x1) /* IEEE VLAN Packet */
+#define NGBE_RXD_STAT_UDPCS		MS(6, 0x1) /* UDP xsum calculated */
+#define NGBE_RXD_STAT_L4CS		MS(7, 0x1) /* L4 xsum calculated */
+#define NGBE_RXD_STAT_IPCS		MS(8, 0x1) /* IP xsum calculated */
+#define NGBE_RXD_STAT_PIF		MS(9, 0x1) /* Non-unicast address */
+#define NGBE_RXD_STAT_EIPCS		MS(10, 0x1) /* Encap IP xsum calculated */
+#define NGBE_RXD_STAT_VEXT		MS(11, 0x1) /* Multi-VLAN */
+#define NGBE_RXD_STAT_IPV6EX		MS(12, 0x1) /* IPv6 with option header */
+#define NGBE_RXD_STAT_LLINT		MS(13, 0x1) /* Pkt caused LLI */
+#define NGBE_RXD_STAT_1588		MS(14, 0x1) /* IEEE 1588 Time Stamp */
+#define NGBE_RXD_STAT_SECP		MS(15, 0x1) /* Security Processing */
+#define NGBE_RXD_STAT_LB		MS(16, 0x1) /* Loopback Status */
+/*** bit 17-30, when PTYPE=IP ***/
+#define NGBE_RXD_STAT_BMC		MS(17, 0x1) /* PTYPE=IP, BMC status */
+#define NGBE_RXD_ERR_HBO		MS(23, 0x1) /* Header Buffer Overflow */
+#define NGBE_RXD_ERR_EIPCS		MS(26, 0x1) /* Encap IP header error */
+#define NGBE_RXD_ERR_SECERR		MS(27, 0x1) /* MACsec or IPsec error */
+#define NGBE_RXD_ERR_RXE		MS(29, 0x1) /* Any MAC Error */
+#define NGBE_RXD_ERR_L4CS		MS(30, 0x1) /* TCP/UDP xsum error */
+#define NGBE_RXD_ERR_IPCS		MS(31, 0x1) /* IP xsum error */
+#define NGBE_RXD_ERR_CSUM(dw)		RS(dw, 30, 0x3)
+
+/* @ngbe_rx_desc.dw3 */
+#define NGBE_RXD_LENGTH(rxd)		((rxd)->qw1.hi.len)
+#define NGBE_RXD_VLAN(rxd)		((rxd)->qw1.hi.tag)
+
 /*****************************************************************************
  * Transmit Descriptor
  *****************************************************************************/
@@ -81,6 +81,8 @@ struct ngbe_tx_desc {
 #define RX_RING_SZ ((NGBE_RING_DESC_MAX + RTE_PMD_NGBE_RX_MAX_BURST) * \
 	    sizeof(struct ngbe_rx_desc))
 
+#define rte_packet_prefetch(p)	rte_prefetch1(p)
+
 #define RTE_NGBE_REGISTER_POLL_WAIT_10_MS 10
 #define RTE_NGBE_WAIT_100_US              100
-- 
2.21.0.windows.1