From: "Butler, Siobhan A"
To: "xuelin.shi@freescale.com", "Ananyev, Konstantin"
Cc: "dev@dpdk.org"
Date: Thu, 2 Apr 2015 20:18:00 +0000
Message-ID: <0C5AFCA4B3408848ADF2A3073F7D8CC86D587220@IRSMSX109.ger.corp.intel.com>
In-Reply-To: <1427786750-30308-1-git-send-email-xuelin.shi@freescale.com>
Subject: Re: [dpdk-dev] [PATCH v3] ixgbe: fix data access on big endian cpu

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of xuelin.shi@freescale.com
> Sent: Tuesday, March 31, 2015 8:26 AM
> To: Ananyev, Konstantin
> Cc: dev@dpdk.org; Xuelin Shi
> Subject: [dpdk-dev] [PATCH v3] ixgbe: fix data access on big endian cpu
>
> From: Xuelin Shi
>
> Enforce the rules for data exchanged between the CPU and the ixgbe NIC:
> 1. The CPU must use rte_le_to_cpu_xx(...) when reading data owned by ixgbe.
> 2. The CPU must use rte_cpu_to_le_xx(...) when filling data for ixgbe.
> 3. Check the descriptor status against a converted constant.
>
> Signed-off-by: Xuelin Shi
> ---
> Changes for v3:
>   check status against a converted constant to avoid a performance penalty.
>   remove tmp variable.
>
>  lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 89 ++++++++++++++++++++++++---------------
>  1 file changed, 56 insertions(+), 33 deletions(-)
>
> diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> index 9da2c7e..6e508ec 100644
> --- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> +++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> @@ -129,7 +129,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq)
>
>  	/* check DD bit on threshold descriptor */
>  	status = txq->tx_ring[txq->tx_next_dd].wb.status;
> -	if (! (status & IXGBE_ADVTXD_STAT_DD))
> +	if (!(status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD)))
>  		return 0;
>
>  	/*
> @@ -174,11 +174,14 @@ tx4(volatile union ixgbe_adv_tx_desc *txdp, struct rte_mbuf **pkts)
>  		pkt_len = (*pkts)->data_len;
>
>  		/* write data to descriptor */
> -		txdp->read.buffer_addr = buf_dma_addr;
> +		txdp->read.buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
> +
>  		txdp->read.cmd_type_len =
> -			((uint32_t)DCMD_DTYP_FLAGS | pkt_len);
> +			rte_cpu_to_le_32((uint32_t)DCMD_DTYP_FLAGS | pkt_len);
> +
>  		txdp->read.olinfo_status =
> -			(pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
> +			rte_cpu_to_le_32(pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
> +
>  		rte_prefetch0(&(*pkts)->pool);
>  	}
>  }
> @@ -194,11 +197,14 @@ tx1(volatile union ixgbe_adv_tx_desc *txdp, struct rte_mbuf **pkts)
>  	pkt_len = (*pkts)->data_len;
>
>  	/* write data to descriptor */
> -	txdp->read.buffer_addr = buf_dma_addr;
> +	txdp->read.buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
> +
>  	txdp->read.cmd_type_len =
> -		((uint32_t)DCMD_DTYP_FLAGS | pkt_len);
> +		rte_cpu_to_le_32((uint32_t)DCMD_DTYP_FLAGS | pkt_len);
> +
>  	txdp->read.olinfo_status =
> -		(pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
> +		rte_cpu_to_le_32(pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
> +
>  	rte_prefetch0(&(*pkts)->pool);
>  }
>
> @@ -285,7 +291,7 @@ tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>  		 * a divisor of the ring size
>  		 */
>  		tx_r[txq->tx_next_rs].read.cmd_type_len |=
> -			rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS);
> +				rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS);
>  		txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
>
>  		txq->tx_tail = 0;
> @@ -304,7 +310,7 @@ tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>  	 */
>  	if (txq->tx_tail > txq->tx_next_rs) {
>  		tx_r[txq->tx_next_rs].read.cmd_type_len |=
> -			rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS);
> +				rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS);
>  		txq->tx_next_rs = (uint16_t)(txq->tx_next_rs +
>  			txq->tx_rs_thresh);
>  		if (txq->tx_next_rs >= txq->nb_tx_desc)
> @@ -505,6 +511,7 @@ ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
>  	uint16_t nb_tx_desc = txq->nb_tx_desc;
>  	uint16_t desc_to_clean_to;
>  	uint16_t nb_tx_to_clean;
> +	uint32_t stat;
>
>  	/* Determine the last descriptor needing to be cleaned */
>  	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
> @@ -513,7 +520,9 @@ ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
>
>  	/* Check to make sure the last descriptor to clean is done */
>  	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
> -	if (! (txr[desc_to_clean_to].wb.status & IXGBE_TXD_STAT_DD))
> +
> +	stat = txr[desc_to_clean_to].wb.status;
> +	if (!(stat & rte_cpu_to_le_32(IXGBE_TXD_STAT_DD)))
>  	{
>  		PMD_TX_FREE_LOG(DEBUG,
>  				"TX descriptor %4u is not done"
> @@ -801,12 +810,14 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>  			 */
>  			slen = m_seg->data_len;
>  			buf_dma_addr = RTE_MBUF_DATA_DMA_ADDR(m_seg);
> +
>  			txd->read.buffer_addr =
> -				rte_cpu_to_le_64(buf_dma_addr);
> +					rte_cpu_to_le_64(buf_dma_addr);
>  			txd->read.cmd_type_len =
> -				rte_cpu_to_le_32(cmd_type_len | slen);
> +					rte_cpu_to_le_32(cmd_type_len | slen);
>  			txd->read.olinfo_status =
> -				rte_cpu_to_le_32(olinfo_status);
> +					rte_cpu_to_le_32(olinfo_status);
> +
>  			txe->last_id = tx_last;
>  			tx_id = txe->next_id;
>  			txe = txn;
> @@ -946,14 +957,16 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
>  	uint64_t pkt_flags;
>  	int s[LOOK_AHEAD], nb_dd;
>  	int i, j, nb_rx = 0;
> +	uint32_t stat;
>
>
>  	/* get references to current descriptor and S/W ring entry */
>  	rxdp = &rxq->rx_ring[rxq->rx_tail];
>  	rxep = &rxq->sw_ring[rxq->rx_tail];
>
> +	stat = rxdp->wb.upper.status_error;
>  	/* check to make sure there is at least 1 packet to receive */
> -	if (! (rxdp->wb.upper.status_error & IXGBE_RXDADV_STAT_DD))
> +	if (!(stat & rte_cpu_to_le_32(IXGBE_RXDADV_STAT_DD)))
>  		return 0;
>
>  	/*
> @@ -965,7 +978,7 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
>  	{
>  		/* Read desc statuses backwards to avoid race condition */
>  		for (j = LOOK_AHEAD-1; j >= 0; --j)
> -			s[j] = rxdp[j].wb.upper.status_error;
> +			s[j] = rte_le_to_cpu_32(rxdp[j].wb.upper.status_error);
>
>  		/* Compute how many status bits were set */
>  		nb_dd = 0;
> @@ -977,27 +990,32 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
>  		/* Translate descriptor info to mbuf format */
>  		for (j = 0; j < nb_dd; ++j) {
>  			mb = rxep[j].mbuf;
> -			pkt_len = (uint16_t)(rxdp[j].wb.upper.length - rxq->crc_len);
> +			pkt_len = rte_le_to_cpu_16(rxdp[j].wb.upper.length) -
> +				rxq->crc_len;
> +
>  			mb->data_len = pkt_len;
>  			mb->pkt_len = pkt_len;
> -			mb->vlan_tci = rxdp[j].wb.upper.vlan;
> +			mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
>
>  			/* convert descriptor fields to rte mbuf flags */
>  			pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
> -				rxdp[j].wb.lower.lo_dword.data);
> +				rte_le_to_cpu_32(rxdp[j].wb.lower.lo_dword.data));
> +
>  			/* reuse status field from scan list */
>  			pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
>  			pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
>  			mb->ol_flags = pkt_flags;
>
>  			if (likely(pkt_flags & PKT_RX_RSS_HASH))
> -				mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
> +				mb->hash.rss = rte_le_to_cpu_32(
> +					rxdp[j].wb.lower.hi_dword.rss);
>  			else if (pkt_flags & PKT_RX_FDIR) {
> -				mb->hash.fdir.hash =
> -					(uint16_t)((rxdp[j].wb.lower.hi_dword.csum_ip.csum)
> -					& IXGBE_ATR_HASH_MASK);
> -				mb->hash.fdir.id = rxdp[j].wb.lower.hi_dword.csum_ip.ip_id;
> +				mb->hash.fdir.hash = rte_le_to_cpu_16(
> +					rxdp[j].wb.lower.hi_dword.csum_ip.csum) &
> +					IXGBE_ATR_HASH_MASK;
> +
> +				mb->hash.fdir.id = rte_le_to_cpu_16(
> +					rxdp[j].wb.lower.hi_dword.csum_ip.ip_id);
>  			}
>  		}
>
> @@ -1221,7 +1239,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>  		 */
>  		rxdp = &rx_ring[rx_id];
>  		staterr = rxdp->wb.upper.status_error;
> -		if (! (staterr & rte_cpu_to_le_32(IXGBE_RXDADV_STAT_DD)))
> +		if (!(staterr & rte_cpu_to_le_32(IXGBE_RXDADV_STAT_DD)))
>  			break;
>  		rxd = *rxdp;
>
> @@ -1325,12 +1343,15 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>  		rxm->ol_flags = pkt_flags;
>
>  		if (likely(pkt_flags & PKT_RX_RSS_HASH))
> -			rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
> +			rxm->hash.rss = rte_le_to_cpu_32(
> +				rxd.wb.lower.hi_dword.rss);
>  		else if (pkt_flags & PKT_RX_FDIR) {
> -			rxm->hash.fdir.hash =
> -				(uint16_t)((rxd.wb.lower.hi_dword.csum_ip.csum)
> -				& IXGBE_ATR_HASH_MASK);
> -			rxm->hash.fdir.id = rxd.wb.lower.hi_dword.csum_ip.ip_id;
> +			rxm->hash.fdir.hash = rte_le_to_cpu_16(
> +				rxd.wb.lower.hi_dword.csum_ip.csum) &
> +				IXGBE_ATR_HASH_MASK;
> +
> +			rxm->hash.fdir.id = rte_le_to_cpu_16(
> +				rxd.wb.lower.hi_dword.csum_ip.ip_id);
>  		}
>  		/*
>  		 * Store the mbuf address into the next entry of the array
> @@ -1413,7 +1434,7 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>  		 */
>  		rxdp = &rx_ring[rx_id];
>  		staterr = rxdp->wb.upper.status_error;
> -		if (! (staterr & rte_cpu_to_le_32(IXGBE_RXDADV_STAT_DD)))
> +		if (!(staterr & rte_cpu_to_le_32(IXGBE_RXDADV_STAT_DD)))
>  			break;
>  		rxd = *rxdp;
>
> @@ -1742,7 +1763,7 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq)
>  	prev = (uint16_t) (txq->nb_tx_desc - 1);
>  	for (i = 0; i < txq->nb_tx_desc; i++) {
>  		volatile union ixgbe_adv_tx_desc *txd = &txq->tx_ring[i];
> -		txd->wb.status = IXGBE_TXD_STAT_DD;
> +		txd->wb.status = rte_cpu_to_le_32(IXGBE_TXD_STAT_DD);
>  		txe[i].mbuf = NULL;
>  		txe[i].last_id = i;
>  		txe[prev].next_id = i;
> @@ -2293,7 +2314,8 @@ ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
>  	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
>
>  	while ((desc < rxq->nb_rx_desc) &&
> -		(rxdp->wb.upper.status_error & IXGBE_RXDADV_STAT_DD)) {
> +		(rte_le_to_cpu_32(rxdp->wb.upper.status_error) &
> +		IXGBE_RXDADV_STAT_DD)) {
>  		desc += IXGBE_RXQ_SCAN_INTERVAL;
>  		rxdp += IXGBE_RXQ_SCAN_INTERVAL;
>  		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
> @@ -2318,7 +2340,8 @@ ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
>  		desc -= rxq->nb_rx_desc;
>
>  	rxdp = &rxq->rx_ring[desc];
> -	return !!(rxdp->wb.upper.status_error & IXGBE_RXDADV_STAT_DD);
> +	return !!(rxdp->wb.upper.status_error &
> +		rte_cpu_to_le_32(IXGBE_RXDADV_STAT_DD));
>  }
>
>  void
> --
> 1.9.1

Hi Xuelin,

I haven't had a chance to test the patches yet, but if it's OK with everyone, can we defer this until after the 2.0 release? It would be risky to make the change right before the release without full testing.

Thanks,
Siobhan