From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Polehn, Mike A"
To: "dev@dpdk.org"
Date: Tue, 27 Oct 2015 20:56:36 +0000
Message-ID: <745DB4B8861F8E4B9849C970520ABBF14974C1EB@ORSMSX102.amr.corp.intel.com>
Subject: [dpdk-dev] [Patch 1/2] i40e RX Bulk Alloc: Larger list size (33 to 128) throughput optimization

Combined two subroutines into one subroutine, with one read operation followed by a buffer allocate and load loop.

Eliminated the staging queue and its subroutine, which removes the extra pointer-list movements and reduces the number of active variable cache pages during the call.

Reduced the queue position variables to just two, the next read point and the last NIC RX descriptor position, and changed the logic so the NIC descriptor table no longer always needs to be kept full.

Moved the NIC tail register update from once per loop to at most one write per driver receive call, to minimize CPU stalls waiting on multiple SMB synchronization points and on earlier NIC register writes, which often take large cycle counts to complete. For example, with an input packet list of 33 and the default loop size of 32, the second NIC register write would occur just after RX processing of a single packet, resulting in a large CPU stall. (An illustrative sketch of this deferred-doorbell pattern follows below.)

Eliminated the initial rx-packet-present test before the rx processing loop, since the loop performs the same check, and less free time is generally available when packets are present than when no input packets are being processed.

Used some native-size (unsigned) variables to help reduce the overhead of non-native variable sizes.
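For illustration only (not part of the patch): a minimal self-contained sketch of the deferred-doorbell refill pattern described above. ring_state, refill_block(), and the plain-variable "register" are hypothetical stand-ins for the driver's queue structure, its block refill routine, and the MMIO tail register; __sync_synchronize() stands in for rte_wmb().

#include <stdint.h>
#include <stdio.h>

struct ring_state {
	volatile uint32_t *tail_reg;  /* stands in for the MMIO tail register */
	unsigned last_pos;            /* last buffer position handed to the NIC */
	unsigned free_thresh;         /* refill granularity (e.g. 32) */
	unsigned nb_desc;             /* ring size */
};

/* Pretend to post one block of free_thresh buffers; 0 on success. */
static int
refill_block(struct ring_state *r)
{
	r->last_pos = (r->last_pos + r->free_thresh) % r->nb_desc;
	return 0;
}

static void
refill_and_notify(struct ring_state *r, unsigned n_empty)
{
	int refilled = 0;

	/* Refill in whole blocks; note: no register write inside the loop. */
	while (n_empty > r->free_thresh) {
		if (refill_block(r) != 0)
			break;
		refilled = 1;
		n_empty -= r->free_thresh;
	}

	/* At most one barrier plus one doorbell write per receive call; in
	 * the patch this is rte_wmb() + I40E_PCI_REG_WRITE(qrx_tail, ...). */
	if (refilled) {
		__sync_synchronize();
		*r->tail_reg = r->last_pos;
	}
}

int
main(void)
{
	uint32_t fake_reg = 0;
	struct ring_state r = { &fake_reg, 127, 32, 128 };

	refill_and_notify(&r, 100); /* three blocks of 32, then one doorbell */
	printf("tail register now %u\n", fake_reg);
	return 0;
}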
Reduced the number of variables, reordered the queue structure to put the most active variables in the first cache line, made better use of the memory bytes inside that cache line, and reduced the active cache line count to one during the processing call. Other RX subroutine sets might still use more than one variable cache line. (A generic sketch of this hot/cold field split appears after the patch below.)

Signed-off-by: Mike A. Polehn

diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index fd656d5..ea63f2f 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -63,6 +63,7 @@
 #define DEFAULT_TX_RS_THRESH   32
 #define DEFAULT_TX_FREE_THRESH 32
 #define I40E_MAX_PKT_TYPE      256
+#define I40E_RX_INPUT_BUF_MAX  256
 
 #define I40E_TX_MAX_BURST  32
 
@@ -959,115 +960,97 @@ check_rx_burst_bulk_alloc_preconditions(__rte_unused struct i40e_rx_queue *rxq)
 }
 
 #ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
-#define I40E_LOOK_AHEAD 8
-#if (I40E_LOOK_AHEAD != 8)
-#error "PMD I40E: I40E_LOOK_AHEAD must be 8\n"
-#endif
-static inline int
-i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
+
+static inline unsigned
+i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
+	unsigned nb_pkts)
 {
 	volatile union i40e_rx_desc *rxdp;
 	struct i40e_rx_entry *rxep;
-	struct rte_mbuf *mb;
-	uint16_t pkt_len;
-	uint64_t qword1;
-	uint32_t rx_status;
-	int32_t s[I40E_LOOK_AHEAD], nb_dd;
-	int32_t i, j, nb_rx = 0;
-	uint64_t pkt_flags;
+	unsigned i, n, tail;
 
-	rxdp = &rxq->rx_ring[rxq->rx_tail];
-	rxep = &rxq->sw_ring[rxq->rx_tail];
-
-	qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
-	rx_status = (qword1 & I40E_RXD_QW1_STATUS_MASK) >>
-			I40E_RXD_QW1_STATUS_SHIFT;
+	/* Wrap tail */
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		tail = 0;
+	else
+		tail = rxq->rx_tail;
+
+	/* Stop at end of Q; at the end, the next read is aligned at Q start */
+	n = rxq->nb_rx_desc - tail;
+	if (n < nb_pkts)
+		nb_pkts = n;
+
+	rxdp = &rxq->rx_ring[tail];
+	rte_prefetch0(rxdp);
+	rxep = &rxq->sw_ring[tail];
+	rte_prefetch0(rxep);
+
+	i = 0;
+	while (nb_pkts > 0) {
+		/* Prefetch NIC descriptors and packet list */
+		if (likely(nb_pkts > 4)) {
+			rte_prefetch0(&rxdp[4]);
+			if (likely(nb_pkts > 8)) {
+				rte_prefetch0(&rxdp[8]);
+				rte_prefetch0(&rxep[8]);
+			}
+		}
 
-	/* Make sure there is at least 1 packet to receive */
-	if (!(rx_status & (1 << I40E_RX_DESC_STATUS_DD_SHIFT)))
-		return 0;
+		for (n = 0; (nb_pkts > 0) && (n < 8); n++, nb_pkts--, i++) {
+			uint64_t qword1;
+			uint64_t pkt_flags;
+			uint16_t pkt_len;
+			struct rte_mbuf *mb = rxep->mbuf;
+			rxep++;
 
-	/**
-	 * Scan LOOK_AHEAD descriptors at a time to determine which
-	 * descriptors reference packets that are ready to be received.
-	 */
-	for (i = 0; i < RTE_PMD_I40E_RX_MAX_BURST; i += I40E_LOOK_AHEAD,
-			rxdp += I40E_LOOK_AHEAD, rxep += I40E_LOOK_AHEAD) {
-		/* Read desc statuses backwards to avoid race condition */
-		for (j = I40E_LOOK_AHEAD - 1; j >= 0; j--) {
+			/* Translate descriptor info to mbuf parameters */
 			qword1 = rte_le_to_cpu_64(\
-				rxdp[j].wb.qword1.status_error_len);
-			s[j] = (qword1 & I40E_RXD_QW1_STATUS_MASK) >>
-					I40E_RXD_QW1_STATUS_SHIFT;
-		}
+				rxdp->wb.qword1.status_error_len);
 
-		/* Compute how many status bits were set */
-		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++)
-			nb_dd += s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT);
+			if (!(((qword1 & I40E_RXD_QW1_STATUS_MASK) >>
+				I40E_RXD_QW1_STATUS_SHIFT)
+				& (1 << I40E_RX_DESC_STATUS_DD_SHIFT)))
+				goto DONE; /* Packet not yet completed */
 
-		nb_rx += nb_dd;
 
-		/* Translate descriptor info to mbuf parameters */
-		for (j = 0; j < nb_dd; j++) {
-			mb = rxep[j].mbuf;
-			qword1 = rte_le_to_cpu_64(\
-				rxdp[j].wb.qword1.status_error_len);
 			pkt_len = ((qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
 				I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
-			mb->data_len = pkt_len;
 			mb->pkt_len = pkt_len;
-			mb->ol_flags = 0;
-			i40e_rxd_to_vlan_tci(mb, &rxdp[j]);
+			mb->data_len = pkt_len;
+			i40e_rxd_to_vlan_tci(mb, rxdp);
 			pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
 			pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
 			mb->packet_type =
 				i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
-				I40E_RXD_QW1_PTYPE_MASK) >>
-				I40E_RXD_QW1_PTYPE_SHIFT));
+					I40E_RXD_QW1_PTYPE_MASK) >>
+					I40E_RXD_QW1_PTYPE_SHIFT));
 			if (pkt_flags & PKT_RX_RSS_HASH)
 				mb->hash.rss = rte_le_to_cpu_32(\
-					rxdp[j].wb.qword0.hi_dword.rss);
+					rxdp->wb.qword0.hi_dword.rss);
 			if (pkt_flags & PKT_RX_FDIR)
-				pkt_flags |= i40e_rxd_build_fdir(&rxdp[j], mb);
+				pkt_flags |= i40e_rxd_build_fdir(rxdp, mb);
+			rxdp++;
 
 #ifdef RTE_LIBRTE_IEEE1588
 			pkt_flags |= i40e_get_iee15888_flags(mb, qword1);
 #endif
-			mb->ol_flags |= pkt_flags;
-
+			mb->ol_flags = pkt_flags;
 		}
-
-		for (j = 0; j < I40E_LOOK_AHEAD; j++)
-			rxq->rx_stage[i + j] = rxep[j].mbuf;
-
-		if (nb_dd != I40E_LOOK_AHEAD)
-			break;
 	}
 
-	/* Clear software ring entries */
-	for (i = 0; i < nb_rx; i++)
-		rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL;
-
-	return nb_rx;
-}
-
-static inline uint16_t
-i40e_rx_fill_from_stage(struct i40e_rx_queue *rxq,
-			struct rte_mbuf **rx_pkts,
-			uint16_t nb_pkts)
-{
-	uint16_t i;
-	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
-
-	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
-
-	for (i = 0; i < nb_pkts; i++)
-		rx_pkts[i] = stage[i];
+DONE:
+	/* Copy packets to output list and clear NIC list */
+	rxep = &rxq->sw_ring[tail];
+	for (n = 0; n < i; n++) {
+		*rx_pkts++ = rxep->mbuf;
+		rxep->mbuf = NULL;
+		rxep++;
+	}
 
-	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
-	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+	if (i) /* Don't wrap if no packets received */
+		rxq->rx_tail = tail + i; /* May equal nb_rx_desc; wrapped on next entry */
 
-	return nb_pkts;
+	return i;
 }
 
 static inline int
@@ -1076,13 +1059,15 @@ i40e_rx_alloc_bufs(struct i40e_rx_queue *rxq)
 	volatile union i40e_rx_desc *rxdp;
 	struct i40e_rx_entry *rxep;
 	struct rte_mbuf *mb;
-	uint16_t alloc_idx, i;
+	unsigned alloc_idx, i;
 	uint64_t dma_addr;
 	int diag;
 
 	/* Allocate buffers in bulk */
-	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
-				(rxq->rx_free_thresh - 1));
+	alloc_idx = rxq->rx_last_pos + 1;
+	if (alloc_idx >= rxq->nb_rx_desc)
+		alloc_idx = 0;
+
 	rxep = &(rxq->sw_ring[alloc_idx]);
 	diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep,
 				rxq->rx_free_thresh);
@@ -1109,84 +1094,72 @@ i40e_rx_alloc_bufs(struct i40e_rx_queue *rxq)
 		rxdp[i].read.pkt_addr = dma_addr;
 	}
 
-	/* Update rx tail regsiter */
-	rte_wmb();
-	I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->rx_free_trigger);
-
-	rxq->rx_free_trigger =
-		(uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh);
-	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
-		rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+	rxq->rx_last_pos = alloc_idx + rxq->rx_free_thresh - 1;
 
 	return 0;
 }
 
-static inline uint16_t
-rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+static uint16_t
+i40e_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
+	uint16_t nb_pkts)
 {
 	struct i40e_rx_queue *rxq = (struct i40e_rx_queue *)rx_queue;
-	uint16_t nb_rx = 0;
-
-	if (!nb_pkts)
-		return 0;
-
-	if (rxq->rx_nb_avail)
-		return i40e_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+	unsigned nb_rx, n_buf, n_empty, n, max_alloc;
+	uint8_t alloced = 0;
 
-	nb_rx = (uint16_t)i40e_rx_scan_hw_ring(rxq);
-	rxq->rx_next_avail = 0;
-	rxq->rx_nb_avail = nb_rx;
-	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+	/* Note: to calc n_buf correctly, tail wraps at start of RX operation */
+	/* Note 2: rxq->rx_last_pos is last packet buf location of NIC */
 
-	if (rxq->rx_tail > rxq->rx_free_trigger) {
-		if (i40e_rx_alloc_bufs(rxq) != 0) {
-			uint16_t i, j;
-
-			PMD_RX_LOG(DEBUG, "Rx mbuf alloc failed for "
-				   "port_id=%u, queue_id=%u",
-				   rxq->port_id, rxq->queue_id);
-			rxq->rx_nb_avail = 0;
-			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
-			for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++)
-				rxq->sw_ring[j].mbuf = rxq->rx_stage[i];
-
-			return 0;
+	/* Calculate current number of buffers */
+	n_buf = rxq->rx_last_pos + 1;
+	if (rxq->rx_tail <= n_buf)
+		n_buf = n_buf - rxq->rx_tail;
+	else
+		n_buf = n_buf + rxq->nb_rx_desc - rxq->rx_tail;
+
+	n = nb_pkts;
+	max_alloc = n + rxq->rx_free_thresh; /* Round up, finish in loop */
+	if (unlikely(n_buf < n)) /* Cannot receive more than buffer count */
+		n = n_buf;
+
+	/* Receive packets */
+	if (likely(n)) {
+		if (unlikely(n > I40E_RX_INPUT_BUF_MAX)) { /* Limit rx count */
+			n = I40E_RX_INPUT_BUF_MAX;
+			max_alloc = I40E_RX_INPUT_BUF_MAX + 1;
 		}
-	}
 
-	if (rxq->rx_tail >= rxq->nb_rx_desc)
-		rxq->rx_tail = 0;
-
-	if (rxq->rx_nb_avail)
-		return i40e_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+		nb_rx = i40e_rx_scan_hw_ring(rxq, rx_pkts, n);
+	} else {
+		nb_rx = 0;
+		if (unlikely(!nb_pkts)) /* Input rx of 0, allow 1 alloc block */
+			max_alloc = rxq->rx_free_thresh + 1;
+	}
 
-	return 0;
-}
+	/* Determine empty count */
+	n_empty = rxq->nb_rx_desc - n_buf + nb_rx;
 
-static uint16_t
-i40e_recv_pkts_bulk_alloc(void *rx_queue,
-			  struct rte_mbuf **rx_pkts,
-			  uint16_t nb_pkts)
-{
-	uint16_t nb_rx = 0, n, count;
+	if (n_empty > max_alloc) /* Limit alloc to rounded up rx receive count */
+		n_empty = max_alloc;
 
-	if (unlikely(nb_pkts == 0))
-		return 0;
+	/* Add empty buffers to NIC descriptor table */
+	while (n_empty > rxq->rx_free_thresh) { /* Round and/or leave 1 empty */
+		if (i40e_rx_alloc_bufs(rxq) != 0)
+			break;
 
-	if (likely(nb_pkts <= RTE_PMD_I40E_RX_MAX_BURST))
-		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+		alloced = 1;
+		n_empty -= rxq->rx_free_thresh;
+	}
 
-	while (nb_pkts) {
-		n = RTE_MIN(nb_pkts, RTE_PMD_I40E_RX_MAX_BURST);
-		count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
-		nb_rx = (uint16_t)(nb_rx + count);
-		nb_pkts = (uint16_t)(nb_pkts - count);
-		if (count < n)
-			break;
+	if (alloced) {
+		/* Update NIC rx tail register */
+		rte_wmb();
+		I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->rx_last_pos);
 	}
 
-	return nb_rx;
+	return (uint16_t)nb_rx;
 }
+
 #endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
 
 uint16_t
@@ -1296,7 +1269,7 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
 	if (nb_hold > rxq->rx_free_thresh) {
 		rx_id = (uint16_t) ((rx_id == 0) ?
-			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+			(uint16_t)(rxq->nb_rx_desc - 1) : (rx_id - 1));
 		I40E_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
 		nb_hold = 0;
 	}
@@ -1468,7 +1441,7 @@ i40e_recv_scattered_pkts(void *rx_queue,
 	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
 	if (nb_hold > rxq->rx_free_thresh) {
 		rx_id = (uint16_t)(rx_id == 0 ?
-			(rxq->nb_rx_desc - 1) : (rx_id - 1));
+			(uint16_t)(rxq->nb_rx_desc - 1) : (rx_id - 1));
 		I40E_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
 		nb_hold = 0;
 	}
@@ -2578,17 +2551,6 @@ i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq)
 			rxq->sw_ring[i].mbuf = NULL;
 		}
 	}
-#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
-	if (rxq->rx_nb_avail == 0)
-		return;
-	for (i = 0; i < rxq->rx_nb_avail; i++) {
-		struct rte_mbuf *mbuf;
-
-		mbuf = rxq->rx_stage[rxq->rx_next_avail + i];
-		rte_pktmbuf_free_seg(mbuf);
-	}
-	rxq->rx_nb_avail = 0;
-#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
 }
 
 void
@@ -2617,9 +2579,7 @@ i40e_reset_rx_queue(struct i40e_rx_queue *rxq)
 	for (i = 0; i < RTE_PMD_I40E_RX_MAX_BURST; ++i)
 		rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
 
-	rxq->rx_nb_avail = 0;
-	rxq->rx_next_avail = 0;
-	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+	rxq->rx_last_pos = rxq->nb_rx_desc - 1;
 #endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
 	rxq->rx_tail = 0;
 	rxq->nb_rx_hold = 0;
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 4385142..4146a63 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -85,34 +85,35 @@ struct i40e_rx_entry {
 struct i40e_rx_queue {
 	struct rte_mempool *mp; /**< mbuf pool to populate RX ring */
 	volatile union i40e_rx_desc *rx_ring;/**< RX ring virtual address */
-	uint64_t rx_ring_phys_addr; /**< RX ring DMA address */
 	struct i40e_rx_entry *sw_ring; /**< address of RX soft ring */
-	uint16_t nb_rx_desc; /**< number of RX descriptors */
-	uint16_t rx_free_thresh; /**< max free RX desc to hold */
-	uint16_t rx_tail; /**< current value of tail */
-	uint16_t nb_rx_hold; /**< number of held free RX desc */
-	struct rte_mbuf *pkt_first_seg; /**< first segment of current packet */
-	struct rte_mbuf *pkt_last_seg; /**< last segment of current packet */
+	volatile uint8_t *qrx_tail; /**< register address of tail */
+	unsigned nb_rx_desc; /**< number of RX descriptors */
+	unsigned rx_free_thresh; /**< max free RX desc to hold */
+	unsigned rx_tail; /**< current value of tail */
 #ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
-	uint16_t rx_nb_avail; /**< number of staged packets ready */
-	uint16_t rx_next_avail; /**< index of next staged packets */
-	uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
-	struct rte_mbuf fake_mbuf; /**< dummy mbuf */
-	struct rte_mbuf *rx_stage[RTE_PMD_I40E_RX_MAX_BURST * 2];
+	unsigned rx_last_pos; /* Position of last packet buf: NIC reg value */
#endif
 	uint8_t port_id; /**< device port ID */
 	uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise */
-	uint16_t queue_id; /**< RX queue index */
-	uint16_t reg_idx; /**< RX queue register index */
+	uint8_t hs_mode; /* Header Split mode */
 	uint8_t drop_en; /**< if not 0, set register bit */
-	volatile uint8_t *qrx_tail; /**< register address of tail */
+	uint16_t nb_rx_hold; /**< number of held free RX desc */
+	uint16_t queue_id; /**< RX queue index */
+	struct rte_mbuf *pkt_first_seg; /**< first segment of current packet */
+	struct rte_mbuf *pkt_last_seg; /**< last segment of current packet */
+
+	/* Setup and seldom used variables */
+	uint64_t rx_ring_phys_addr; /**< RX ring DMA address */
 	struct i40e_vsi *vsi; /**< the VSI this queue belongs to */
 	uint16_t rx_buf_len; /* The packet buffer size */
 	uint16_t rx_hdr_len; /* The header buffer size */
+	uint16_t reg_idx; /**< RX queue register index */
 	uint16_t max_pkt_len; /* Maximum packet length */
-	uint8_t hs_mode; /* Header Split mode */
 	bool q_set; /**< indicate if rx queue has been configured */
 	bool rx_deferred_start; /**< don't start this queue in dev start */
+#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
+	struct rte_mbuf fake_mbuf; /**< dummy mbuf */
+#endif
 };
 
 struct i40e_tx_entry {
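As background on the i40e_rx_queue reordering above: the intent is the usual hot/cold field split, keeping the fields touched on every receive call within the first 64-byte cache line and pushing setup-time fields later. A small self-contained sketch of the pattern (hypothetical field names, assuming 64-byte cache lines on an LP64 target; this is not the driver struct itself):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct rxq_sketch {
	/* Hot: touched on every receive call; fits in the first 64 bytes */
	void *ring;                 /* descriptor ring base address */
	void *sw_ring;              /* software (mbuf) ring */
	volatile uint8_t *tail_reg; /* NIC tail register address */
	unsigned nb_desc;           /* ring size */
	unsigned free_thresh;       /* refill granularity */
	unsigned tail;              /* next read position */
	unsigned last_pos;          /* last buffer position given to NIC */

	/* Cold: setup-time or seldom-used fields, pushed to later lines */
	uint64_t ring_phys_addr;
	uint16_t buf_len;
	uint16_t max_pkt_len;
	uint8_t port_id;
};

int
main(void)
{
	/* On an LP64 target the hot fields end well before offset 64. */
	printf("hot fields end at offset %zu\n",
	       offsetof(struct rxq_sketch, last_pos) + sizeof(unsigned));
	printf("cold fields begin at offset %zu\n",
	       offsetof(struct rxq_sketch, ring_phys_addr));
	return 0;
}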