From mboxrd@z Thu Jan  1 00:00:00 1970
From: Eric Kinzie
To: dev@dpdk.org
Date: Tue, 26 May 2015 08:52:28 -0700
Message-Id: <1432655548-13949-2-git-send-email-ehkinzie@gmail.com>
In-Reply-To: <1432655548-13949-1-git-send-email-ehkinzie@gmail.com>
References: <1432655548-13949-1-git-send-email-ehkinzie@gmail.com>
Subject: [dpdk-dev] [PATCH] ixgbe: fall back to non-vector rx
List-Id: patches and discussions about DPDK

The ixgbe driver refuses to receive any packets when vector receive
is enabled and fewer than the minimum number of required mbufs (32)
are supplied.
This makes it incompatible with the bonding driver, which may start a
receive with enough buffers but, as it collects packets from each of the
enslaved interfaces, can drop below that threshold.  Instead of just
giving up when insufficient buffers are supplied, fall back to the
original, non-vector, ixgbe receive function.

Signed-off-by: Eric Kinzie
---
 drivers/net/ixgbe/ixgbe_rxtx.c     | 10 +++++-----
 drivers/net/ixgbe/ixgbe_rxtx.h     |  4 ++++
 drivers/net/ixgbe/ixgbe_rxtx_vec.c |  4 ++--
 3 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 4f9ab22..fbba0ab 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -1088,9 +1088,9 @@ ixgbe_rx_fill_from_stage(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	return nb_pkts;
 }
 
-static inline uint16_t
-rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-	     uint16_t nb_pkts)
+uint16_t
+ixgbe_rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		   uint16_t nb_pkts)
 {
 	struct ixgbe_rx_queue *rxq = (struct ixgbe_rx_queue *)rx_queue;
 	uint16_t nb_rx = 0;
@@ -1158,14 +1158,14 @@ ixgbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
 		return 0;
 
 	if (likely(nb_pkts <= RTE_PMD_IXGBE_RX_MAX_BURST))
-		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+		return ixgbe_rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
 
 	/* request is relatively large, chunk it up */
 	nb_rx = 0;
 	while (nb_pkts) {
 		uint16_t ret, n;
 		n = (uint16_t)RTE_MIN(nb_pkts, RTE_PMD_IXGBE_RX_MAX_BURST);
-		ret = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		ret = ixgbe_rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
 		nb_rx = (uint16_t)(nb_rx + ret);
 		nb_pkts = (uint16_t)(nb_pkts - ret);
 		if (ret < n)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index af36438..811e514 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -283,6 +283,10 @@ uint16_t ixgbe_recv_scattered_pkts_vec(void *rx_queue,
 int
 ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
 int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
 
+uint16_t ixgbe_rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts);
+
+
 #ifdef RTE_IXGBE_INC_VECTOR
 uint16_t ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
index abd10f6..d27424c 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
@@ -181,7 +181,7 @@ desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
  * in one loop
  *
  * Notice:
- * - nb_pkts < RTE_IXGBE_VPMD_RX_BURST, just return no packet
+ * - nb_pkts < RTE_IXGBE_VPMD_RX_BURST, fall back to non-vector receive
  * - nb_pkts > RTE_IXGBE_VPMD_RX_BURST, only scan RTE_IXGBE_VPMD_RX_BURST
  *   numbers of DD bit
  * - don't support ol_flags for rss and csum err
@@ -206,7 +206,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	__m128i dd_check, eop_check;
 
 	if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
-		return 0;
+		return ixgbe_rx_recv_pkts(rxq, rx_pkts, nb_pkts);
 
 	/* Just the act of getting into the function from the application is
  	 * going to cost about 7 cycles */
-- 
1.7.10.4