From mboxrd@z Thu Jan 1 00:00:00 1970
From: Balazs Nemeth <balazs.nemeth@intel.com>
To: dev@dpdk.org
Cc: Balazs Nemeth <balazs.nemeth@intel.com>
Date: Fri, 26 Sep 2014 09:57:20 +0000
Message-Id: <1411725440-4520-1-git-send-email-balazs.nemeth@intel.com>
X-Mailer: git-send-email 2.1.0
Subject: [dpdk-dev] [PATCH] ixgbe: fix crash caused by bulk allocation failure in vector pmd

Since the introduction of the vector PMD, a bug in ixgbe_rxq_rearm could
cause a crash. As long as the memory pool allocated to the RX queue has
mbufs available, there is no problem. Once _all_ mbufs have been
allocated from the memory pool, however, mbufs previously returned by
rte_eth_rx_burst could be accessed by subsequent calls to the PMD and
could be returned again by subsequent calls to rte_eth_rx_burst. From
the perspective of the application, this means that fields within an
mbuf could change and that a previously allocated mbuf could appear
multiple times.

After an mbuf allocation failure, the DD ("descriptor done") bits
should indicate that the packets are not ready. To that end, this patch
adds code to reset the DD bits in the first RTE_IXGBE_DESCS_PER_LOOP
descriptors of the next RTE_IXGBE_RXQ_REARM_THRESH descriptors, but
only if those next RTE_IXGBE_RXQ_REARM_THRESH descriptors still refer
to previously allocated mbufs.

Resetting the bits alone is not enough. The bits are checked _after_
the mbuf fields are set, so a mechanism is needed to prevent the
previously used mbuf pointers from being accessed during the
speculative load of the mbuf fields. For this reason, not only are the
DD bits reset, but the mbufs associated with those descriptors are also
set to point to a "fake" mbuf.
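To make the hazard concrete, here is a self-contained toy model of the
double-return bug and of the fix. The types (toy_desc, toy_mbuf) and
the one-slot ring are illustrative stand-ins, not the real DPDK/ixgbe
structures; bit 0 of the status word plays the role of the DD bit:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct toy_desc {
	uint64_t addr;
	uint64_t status;	/* bit 0 = DD ("descriptor done") */
};

struct toy_mbuf {
	int id;
};

#define DD 0x1ULL

int main(void)
{
	struct toy_desc ring[1];
	struct toy_mbuf *sw_ring[1];
	struct toy_mbuf pkt = { .id = 42 };
	static struct toy_mbuf fake;	/* plays the role of rxq->fake_mbuf */

	/* A previous cycle: the NIC completed slot 0 (DD set) and the
	 * PMD handed pkt to the application via rte_eth_rx_burst(). */
	ring[0].status = DD;
	sw_ring[0] = &pkt;

	/* Rearm fails because the mempool is empty. Without the fix the
	 * slot is left untouched, so the stale DD bit makes the receive
	 * path hand out the same mbuf pointer a second time. */
	if (ring[0].status & DD)
		printf("bug: mbuf %d would be returned twice\n",
		       sw_ring[0]->id);

	/* The fix: zero the descriptor (clearing DD) and park the slot
	 * on a fake mbuf, so a speculative load of mbuf fields never
	 * dereferences the stale pointer. */
	memset(&ring[0], 0, sizeof(ring[0]));
	sw_ring[0] = &fake;

	if (!(ring[0].status & DD))
		printf("ok: descriptor marked not ready\n");

	return 0;
}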
Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
---
 lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
index 203ddf7..457f267 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
@@ -54,17 +54,28 @@ ixgbe_rxq_rearm(struct igb_rx_queue *rxq)
 	struct rte_mbuf *mb0, *mb1;
 	__m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM,
 			RTE_PKTMBUF_HEADROOM);
+	__m128i dma_addr0, dma_addr1;
+
+	rxdp = rxq->rx_ring + rxq->rxrearm_start;
 
 	/* Pull 'n' more MBUFs into the software ring */
 	if (rte_mempool_get_bulk(rxq->mb_pool,
-				 (void *)rxep, RTE_IXGBE_RXQ_REARM_THRESH) < 0)
+				 (void *)rxep,
+				 RTE_IXGBE_RXQ_REARM_THRESH) < 0) {
+		if (rxq->rxrearm_nb + RTE_IXGBE_RXQ_REARM_THRESH >=
+		    rxq->nb_rx_desc) {
+			dma_addr0 = _mm_xor_si128(dma_addr0, dma_addr0);
+			for (i = 0; i < RTE_IXGBE_DESCS_PER_LOOP; i++) {
+				rxep[i].mbuf = &rxq->fake_mbuf;
+				_mm_store_si128((__m128i *)&rxdp[i].read,
+						dma_addr0);
+			}
+		}
 		return;
-
-	rxdp = rxq->rx_ring + rxq->rxrearm_start;
+	}
 
 	/* Initialize the mbufs in vector, process 2 mbufs in one loop */
 	for (i = 0; i < RTE_IXGBE_RXQ_REARM_THRESH; i += 2, rxep += 2) {
-		__m128i dma_addr0, dma_addr1;
 		__m128i vaddr0, vaddr1;
 
 		mb0 = rxep[0].mbuf;
-- 
2.1.0
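A note on the _mm_xor_si128(dma_addr0, dma_addr0) idiom in the failure
path above: XOR-ing a register with itself yields zero regardless of
its previous (here, uninitialized) contents, so the following
_mm_store_si128 writes 16 zero bytes over the descriptor, clearing the
stale DD bit. A minimal standalone demonstration of the idiom, assuming
an SSE2-capable compiler (this program is illustrative only, not part
of the patch):

#include <emmintrin.h>	/* SSE2 intrinsics */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Arbitrary starting contents; x ^ x == 0 for any x. */
	__m128i v = _mm_set_epi32(0xdead, 0xbeef, 0x1234, 0x5678);
	uint64_t out[2];

	v = _mm_xor_si128(v, v);
	_mm_storeu_si128((__m128i *)out, v);

	printf("%llu %llu\n",	/* prints: 0 0 */
	       (unsigned long long)out[0],
	       (unsigned long long)out[1]);
	return 0;
}

_mm_setzero_si128() expresses the same intent more directly and avoids
reading an uninitialized variable; compilers typically emit the same
pxor instruction for both forms.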