From: "Ananyev, Konstantin"
To: Eric Kinzie
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH] ixgbe: fall back to non-vector rx
Date: Thu, 28 May 2015 09:59:24 +0000
Message-ID: <2601191342CEEE43887BDE71AB977258214333D4@irsmsx105.ger.corp.intel.com>
In-Reply-To: <20150527182000.GB27368@zaphod>
References: <1432655548-13949-1-git-send-email-ehkinzie@gmail.com>
 <1432655548-13949-2-git-send-email-ehkinzie@gmail.com>
 <2601191342CEEE43887BDE71AB97725821432F20@irsmsx105.ger.corp.intel.com>
 <20150527182000.GB27368@zaphod>

Hi Eric,

> -----Original Message-----
> From: Eric Kinzie [mailto:ekinzie@brocade.com]
> Sent: Wednesday, May 27, 2015 7:20 PM
> To: Ananyev, Konstantin
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] ixgbe: fall back to non-vector rx
>
> On Wed May 27 10:20:39 +0000 2015, Ananyev, Konstantin wrote:
> > Hi Eric,
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Eric Kinzie
> > > Sent: Tuesday, May 26, 2015 4:52 PM
> > > To: dev@dpdk.org
> > > Subject: [dpdk-dev] [PATCH] ixgbe: fall back to non-vector rx
> > >
> > > The ixgbe driver refuses to receive any packets when vector receive
> > > is enabled and fewer than the minimum number of required mbufs (32)
> > > are supplied.  This makes it incompatible with the bonding driver
> > > which, during receive, may start out with enough buffers but as it
> > > collects packets from each of the enslaved interfaces can drop below
> > > that threshold.  Instead of just giving up when insufficient buffers are
> > > supplied, fall back to the original, non-vector, ixgbe receive function.
> >
> > Right now, vector and bulk_alloc RX methods are not interchangeable.
> > Once you set up your RX queue, you can't mix them.
> > It would be good to make the vector RX method work with an arbitrary
> > number of packets, but I don't think your method would work properly.
> > In the meantime, I wonder whether this problem can be handled at the
> > bonding device level?  Something like preventing vector RX from being
> > enabled at setup stage, or something?
> > Konstantin
>
>
> Konstantin, thanks for reviewing this -- I'll look for some other way to
> address the problem.
>
> Regardless of how this is dealt with, is it acceptable to make
> _recv_raw_pkts_vec() return an error when nb_pkts is too small?  Or will
> this cause problems elsewhere?

I am afraid it would.
Right now the rte_eth_rx_burst() function does not provide any error information;
it just returns the number of packets received.
Changing that would have a massive impact on both the DPDK libraries and external applications.
Konstantin
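For reference, a minimal sketch of a typical rte_eth_rx_burst() call site
(prototype shown roughly as in the DPDK 2.x era; the port and queue values
are placeholders).  The caller only gets back a packet count, so a "too few
mbufs supplied" condition inside a PMD has nowhere to be reported as an
error -- it would be indistinguishable from "no packets available":

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  #define BURST_SIZE 32

  static void
  poll_queue(uint8_t port_id, uint16_t queue_id)
  {
          struct rte_mbuf *bufs[BURST_SIZE];
          uint16_t nb_rx, i;

          /* Returns only a count in [0, BURST_SIZE]; there is no error channel. */
          nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);

          for (i = 0; i < nb_rx; i++)
                  rte_pktmbuf_free(bufs[i]);  /* consume the packet; freed here for brevity */
  }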
>
> Eric
>
>
> >
> > > Signed-off-by: Eric Kinzie
> > > ---
> > >  drivers/net/ixgbe/ixgbe_rxtx.c     | 10 +++++-----
> > >  drivers/net/ixgbe/ixgbe_rxtx.h     |  4 ++++
> > >  drivers/net/ixgbe/ixgbe_rxtx_vec.c |  4 ++--
> > >  3 files changed, 11 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> > > index 4f9ab22..fbba0ab 100644
> > > --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> > > +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> > > @@ -1088,9 +1088,9 @@ ixgbe_rx_fill_from_stage(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
> > >  	return nb_pkts;
> > >  }
> > >
> > > -static inline uint16_t
> > > -rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> > > -	     uint16_t nb_pkts)
> > > +uint16_t
> > > +ixgbe_rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> > > +	     uint16_t nb_pkts)
> > >  {
> > >  	struct ixgbe_rx_queue *rxq = (struct ixgbe_rx_queue *)rx_queue;
> > >  	uint16_t nb_rx = 0;
> > > @@ -1158,14 +1158,14 @@ ixgbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
> > >  		return 0;
> > >
> > >  	if (likely(nb_pkts <= RTE_PMD_IXGBE_RX_MAX_BURST))
> > > -		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
> > > +		return ixgbe_rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
> > >
> > >  	/* request is relatively large, chunk it up */
> > >  	nb_rx = 0;
> > >  	while (nb_pkts) {
> > >  		uint16_t ret, n;
> > >  		n = (uint16_t)RTE_MIN(nb_pkts, RTE_PMD_IXGBE_RX_MAX_BURST);
> > > -		ret = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
> > > +		ret = ixgbe_rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
> > >  		nb_rx = (uint16_t)(nb_rx + ret);
> > >  		nb_pkts = (uint16_t)(nb_pkts - ret);
> > >  		if (ret < n)
> > > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> > > index af36438..811e514 100644
> > > --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> > > +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> > > @@ -283,6 +283,10 @@ uint16_t ixgbe_recv_scattered_pkts_vec(void *rx_queue,
> > >  int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
> > >  int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
> > >
> > > +uint16_t ixgbe_rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> > > +		uint16_t nb_pkts);
> > > +
> > > +
> > >  #ifdef RTE_IXGBE_INC_VECTOR
> > >
> > >  uint16_t ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
> > > diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> > > index abd10f6..d27424c 100644
> > > --- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> > > +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> > > @@ -181,7 +181,7 @@ desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
> > >   * in one loop
> > >   *
> > >   * Notice:
> > > - * - nb_pkts < RTE_IXGBE_VPMD_RX_BURST, just return no packet
> > > + * - nb_pkts < RTE_IXGBE_VPMD_RX_BURST, fall back to non-vector receive
> > >   * - nb_pkts > RTE_IXGBE_VPMD_RX_BURST, only scan RTE_IXGBE_VPMD_RX_BURST
> > >   *   numbers of DD bit
> > >   * - don't support ol_flags for rss and csum err
> > > @@ -206,7 +206,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
> > >  	__m128i dd_check, eop_check;
> > >
> > >  	if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
> > > -		return 0;
> > > +		return ixgbe_rx_recv_pkts(rxq, rx_pkts, nb_pkts);
> > >
> > >  	/* Just the act of getting into the function from the application is
> > >  	 * going to cost about 7 cycles */
> > > --
> > > 1.7.10.4
> >
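As a footnote to the suggestion above about handling this at the bonding
device level: one conceivable shape for such a workaround is sketched below.
It is purely illustrative -- not existing DPDK code and not the patch above.
The idea is that the bonding RX path reads from each slave into a local
scratch array that always satisfies the vector PMD's minimum burst, then
hands back only as many packets as the caller has room for.  The function
name bond_rx_from_slave() and the constant SLAVE_SCRATCH_BURST are made up
for the example, and a real implementation would need to do something
smarter than dropping the overflow:

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  #define SLAVE_SCRATCH_BURST 32  /* >= RTE_IXGBE_VPMD_RX_BURST */

  static uint16_t
  bond_rx_from_slave(uint8_t slave_port, uint16_t queue_id,
                     struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
  {
          struct rte_mbuf *scratch[SLAVE_SCRATCH_BURST];
          uint16_t nb_rx, n, i;

          /* Always request a full vector burst, no matter how little
           * room the caller has left. */
          nb_rx = rte_eth_rx_burst(slave_port, queue_id, scratch,
                                   SLAVE_SCRATCH_BURST);

          /* Copy out what fits... */
          n = (nb_rx < nb_pkts) ? nb_rx : nb_pkts;
          for (i = 0; i < n; i++)
                  rx_pkts[i] = scratch[i];

          /* ...and (here, crudely) drop what does not. */
          for (; i < nb_rx; i++)
                  rte_pktmbuf_free(scratch[i]);

          return n;
  }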