From mboxrd@z Thu Jan 1 00:00:00 1970
From: Qi Zhang <qi.z.zhang@intel.com>
To: qi.z.zhang@intel.com
Cc: stable@dpdk.org
Date: Fri, 28 Apr 2017 18:08:03 -0400
Message-Id: <1493417284-32355-3-git-send-email-qi.z.zhang@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1493417284-32355-1-git-send-email-qi.z.zhang@intel.com>
References: <1493417284-32355-1-git-send-email-qi.z.zhang@intel.com>
Subject: [dpdk-stable] [PATCH 2/3] net/ixgbe: fix memory overflow for 32 bit vPMD
List-Id: patches for DPDK stable branches

The returned mbuf pointers of _recv_raw_pkts_vec are written out of
bounds on 32-bit builds: one 16-byte store into rx_pkts covers four
mbuf pointers rather than two, so the second store to &rx_pkts[pos + 2]
runs past the four entries handled in each loop iteration.

Fixes: c95584dc2b18 ("ixgbe: new vectorized functions for Rx/Tx")
Cc: stable@dpdk.org

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 795ee33..a7bc199 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -337,9 +337,13 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		__m128i descs[RTE_IXGBE_DESCS_PER_LOOP];
 		__m128i pkt_mb1, pkt_mb2, pkt_mb3, pkt_mb4;
 		__m128i zero, staterr, sterr_tmp1, sterr_tmp2;
-		__m128i mbp1, mbp2; /* two mbuf pointer in one XMM reg. */
+		/* 2 64 bit or 4 32 bit mbuf pointers in one XMM reg. */
+		__m128i mbp1;
+#if defined(RTE_ARCH_X86_64)
+		__m128i mbp2;
+#endif

-		/* B.1 load 1 mbuf point */
+		/* B.1 load 2 (64 bit) or 4 (32 bit) mbuf pointers */
 		mbp1 = _mm_loadu_si128((__m128i *)&sw_ring[pos]);

 		/* Read desc statuses backwards to avoid race condition */
@@ -347,11 +351,13 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		descs[3] = _mm_loadu_si128((__m128i *)(rxdp + 3));
 		rte_compiler_barrier();

-		/* B.2 copy 2 mbuf point into rx_pkts */
+		/* B.2 copy 2 64 bit or 4 32 bit mbuf pointers into rx_pkts */
 		_mm_storeu_si128((__m128i *)&rx_pkts[pos], mbp1);

-		/* B.1 load 1 mbuf point */
+#if defined(RTE_ARCH_X86_64)
+		/* B.1 load 2 64 bit mbuf pointers */
 		mbp2 = _mm_loadu_si128((__m128i *)&sw_ring[pos+2]);
+#endif

 		descs[2] = _mm_loadu_si128((__m128i *)(rxdp + 2));
 		rte_compiler_barrier();
@@ -360,8 +366,10 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		rte_compiler_barrier();
 		descs[0] = _mm_loadu_si128((__m128i *)(rxdp));

+#if defined(RTE_ARCH_X86_64)
 		/* B.2 copy 2 mbuf point into rx_pkts */
 		_mm_storeu_si128((__m128i *)&rx_pkts[pos+2], mbp2);
+#endif

 		if (split_packet) {
 			rte_mbuf_prefetch_part2(rx_pkts[pos]);
-- 
2.7.4
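
[Editor's illustrative sketch, not part of the patch. It is a minimal
standalone C program showing the arithmetic behind the overflow the
commit message describes: one 16-byte _mm_storeu_si128 covers two mbuf
pointers on a 64-bit build but four on a 32-bit build, so the second
store at &rx_pkts[pos + 2] would touch entries up to pos + 5 on 32 bit.
All variable names below are local to this example; only rx_pkts, mbp1,
mbp2 and _recv_raw_pkts_vec refer to identifiers from the patch.]

/* Sketch only: models the pointer counts, does not use SSE itself. */
#include <stdio.h>

int main(void)
{
	const size_t xmm_bytes = 16;              /* width of one __m128i store */
	const size_t ptr_bytes = sizeof(void *);  /* 8 on 64-bit, 4 on 32-bit */
	const size_t ptrs_per_store = xmm_bytes / ptr_bytes;

	/* The vector loop stores mbp1 at &rx_pkts[pos] and mbp2 at
	 * &rx_pkts[pos + 2]; each iteration owns rx_pkts[pos .. pos + 3]. */
	const size_t last_index = 2 + ptrs_per_store - 1;

	printf("mbuf pointers per 16-byte store: %zu\n", ptrs_per_store);
	printf("second store covers rx_pkts[pos + 2 .. pos + %zu]\n", last_index);
	return 0;
}

[On a 64-bit build this prints pos + 3 (in bounds); rebuilt for 32 bit
it prints pos + 5, which is the out-of-bounds write the #if
defined(RTE_ARCH_X86_64) guards in the patch avoid by using only the
single mbp1 store on 32-bit targets.]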