From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Ananyev, Konstantin"
To: Eric Kinzie, "dev@dpdk.org"
Date: Wed, 27 May 2015 10:20:39 +0000
Message-ID: <2601191342CEEE43887BDE71AB97725821432F20@irsmsx105.ger.corp.intel.com>
References: <1432655548-13949-1-git-send-email-ehkinzie@gmail.com>
 <1432655548-13949-2-git-send-email-ehkinzie@gmail.com>
In-Reply-To: <1432655548-13949-2-git-send-email-ehkinzie@gmail.com>
Subject: Re: [dpdk-dev] [PATCH] ixgbe: fall back to non-vector rx
List-Id: patches and discussions about DPDK

Hi Eric,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Eric Kinzie
> Sent: Tuesday, May 26, 2015 4:52 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH] ixgbe: fall back to non-vector rx
>
> The ixgbe driver
refuses to receive any packets when vector receive
> is enabled and fewer than the minimum number of required mbufs (32)
> are supplied. This makes it incompatible with the bonding driver
> which, during receive, may start out with enough buffers but, as it
> collects packets from each of the enslaved interfaces, can drop below
> that threshold. Instead of just giving up when insufficient buffers are
> supplied, fall back to the original, non-vector, ixgbe receive function.

Right now, the vector and bulk_alloc RX methods are not interchangeable:
once you have set up your RX queue, you can't mix them.

It would be good to make the vector RX method work with an arbitrary number
of packets, but I don't think your method would work properly.

In the meantime, I wonder whether this problem could be handled at the
bonding device level instead? Something like preventing vector RX from
being enabled at the setup stage, perhaps?

Konstantin

>
> Signed-off-by: Eric Kinzie
> ---
>  drivers/net/ixgbe/ixgbe_rxtx.c     | 10 +++++-----
>  drivers/net/ixgbe/ixgbe_rxtx.h     |  4 ++++
>  drivers/net/ixgbe/ixgbe_rxtx_vec.c |  4 ++--
>  3 files changed, 11 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index 4f9ab22..fbba0ab 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -1088,9 +1088,9 @@ ixgbe_rx_fill_from_stage(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  	return nb_pkts;
>  }
>
> -static inline uint16_t
> -rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> -	     uint16_t nb_pkts)
> +uint16_t
> +ixgbe_rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> +		   uint16_t nb_pkts)
>  {
>  	struct ixgbe_rx_queue *rxq = (struct ixgbe_rx_queue *)rx_queue;
>  	uint16_t nb_rx = 0;
> @@ -1158,14 +1158,14 @@ ixgbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
>  		return 0;
>
>  	if (likely(nb_pkts <= RTE_PMD_IXGBE_RX_MAX_BURST))
> -		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
> +		return ixgbe_rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
>
>  	/* request is relatively large, chunk it up */
>  	nb_rx = 0;
>  	while (nb_pkts) {
>  		uint16_t ret, n;
>  		n = (uint16_t)RTE_MIN(nb_pkts, RTE_PMD_IXGBE_RX_MAX_BURST);
> -		ret = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
> +		ret = ixgbe_rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
>  		nb_rx = (uint16_t)(nb_rx + ret);
>  		nb_pkts = (uint16_t)(nb_pkts - ret);
>  		if (ret < n)
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> index af36438..811e514 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> @@ -283,6 +283,10 @@ uint16_t ixgbe_recv_scattered_pkts_vec(void *rx_queue,
>  int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
>  int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
>
> +uint16_t ixgbe_rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> +		uint16_t nb_pkts);
> +
> +
>  #ifdef RTE_IXGBE_INC_VECTOR
>
>  uint16_t ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> index abd10f6..d27424c 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> @@ -181,7 +181,7 @@ desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
>   * in one loop
>   *
>   * Notice:
> - * - nb_pkts < RTE_IXGBE_VPMD_RX_BURST, just return no packet
> + * - nb_pkts < RTE_IXGBE_VPMD_RX_BURST, fall back to non-vector receive
>   * - nb_pkts > RTE_IXGBE_VPMD_RX_BURST, only scan RTE_IXGBE_VPMD_RX_BURST
>   *   numbers of DD bit
>   * - don't support ol_flags for rss and csum err
> @@ -206,7 +206,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  	__m128i dd_check, eop_check;
>
>  	if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
> -		return 0;
> +		return ixgbe_rx_recv_pkts(rxq, rx_pkts, nb_pkts);
>
>  	/* Just the act of getting into the function from the application is
>  	 * going to cost about 7 cycles */
> --
> 1.7.10.4