From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by dpdk.org (Postfix) with ESMTP id 33B8CDE0 for ; Wed, 11 Feb 2015 18:28:51 +0100 (CET)
Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga102.jf.intel.com with ESMTP; 11 Feb 2015 09:25:00 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.09,559,1418112000"; d="scan'208";a="453215347"
Received: from plxs0284.pdx.intel.com ([10.25.97.128]) by FMSMGA003.fm.intel.com with ESMTP; 11 Feb 2015 09:14:09 -0800
Received: from plxv1143.pdx.intel.com (plxv1143.pdx.intel.com [10.25.98.50]) by plxs0284.pdx.intel.com with ESMTP id t1BHSmjp010290; Wed, 11 Feb 2015 09:28:48 -0800
Received: from plxv1143.pdx.intel.com (localhost [127.0.0.1]) by plxv1143.pdx.intel.com with ESMTP id t1BHSmq4016842; Wed, 11 Feb 2015 09:28:48 -0800
Received: (from jbshaw@localhost) by plxv1143.pdx.intel.com id t1BHSlPD016756; Wed, 11 Feb 2015 09:28:47 -0800
Date: Wed, 11 Feb 2015 09:28:47 -0800
From: Jeff Shaw
To: "Chen Jing D(Mark)"
Message-ID: <20150211172847.GA2984@plxv1143.pdx.intel.com>
References: <1423551775-3604-2-git-send-email-jing.d.chen@intel.com> <1423618298-2933-1-git-send-email-jing.d.chen@intel.com> <1423618298-2933-11-git-send-email-jing.d.chen@intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1423618298-2933-11-git-send-email-jing.d.chen@intel.com>
User-Agent: Mutt/1.4.2.3i
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v4 10/15] fm10k: add receive and tranmit function
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Wed, 11 Feb 2015 17:28:51 -0000

On Wed, Feb 11, 2015 at 09:31:33AM +0800, Chen Jing D(Mark) wrote:
> +uint16_t
> +fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> +		uint16_t nb_pkts)
> +{
> +	struct rte_mbuf *mbuf;
> +	union fm10k_rx_desc desc;
> +	struct fm10k_rx_queue *q = rx_queue;
> +	uint16_t count = 0;
> +	int alloc = 0;
> +	uint16_t next_dd;
> +
> +	next_dd = q->next_dd;
> +
> +	nb_pkts = RTE_MIN(nb_pkts, q->alloc_thresh);
> +	for (count = 0; count < nb_pkts; ++count) {
> +		mbuf = q->sw_ring[next_dd];
> +		desc = q->hw_ring[next_dd];
> +		if (!(desc.d.staterr & FM10K_RXD_STATUS_DD))
> +			break;
> +#ifdef RTE_LIBRTE_FM10K_DEBUG_RX
> +		dump_rxd(&desc);
> +#endif
> +		rte_pktmbuf_pkt_len(mbuf) = desc.w.length;
> +		rte_pktmbuf_data_len(mbuf) = desc.w.length;
> +
> +		mbuf->ol_flags = 0;
> +#ifdef RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE
> +		rx_desc_to_ol_flags(mbuf, &desc);
> +#endif
> +
> +		mbuf->hash.rss = desc.d.rss;
> +
> +		rx_pkts[count] = mbuf;
> +		if (++next_dd == q->nb_desc) {
> +			next_dd = 0;
> +			alloc = 1;
> +		}
> +
> +		/* Prefetch next mbuf while processing current one. */
> +		rte_prefetch0(q->sw_ring[next_dd]);
> +
> +		/*
> +		 * When next RX descriptor is on a cache-line boundary,
> +		 * prefetch the next 4 RX descriptors and the next 8 pointers
> +		 * to mbufs.
> +		 */
> +		if ((next_dd & 0x3) == 0) {
> +			rte_prefetch0(&q->hw_ring[next_dd]);
> +			rte_prefetch0(&q->sw_ring[next_dd]);
> +		}
> +	}
> +
> +	q->next_dd = next_dd;
> +
> +	if ((q->next_dd > q->next_trigger) || (alloc == 1)) {
> +		rte_mempool_get_bulk(q->mp, (void **)&q->sw_ring[q->next_alloc],
> +			q->alloc_thresh);

The return value should be checked here in case the mempool runs out of
buffers. Thanks Helin for spotting this. I'm not sure how I missed it
originally.
> +		for (; q->next_alloc <= q->next_trigger; ++q->next_alloc) {
> +			mbuf = q->sw_ring[q->next_alloc];
> +
> +			/* setup static mbuf fields */
> +			fm10k_pktmbuf_reset(mbuf, q->port_id);
> +
> +			/* write descriptor */
> +			desc.q.pkt_addr = MBUF_DMA_ADDR_DEFAULT(mbuf);
> +			desc.q.hdr_addr = MBUF_DMA_ADDR_DEFAULT(mbuf);
> +			q->hw_ring[q->next_alloc] = desc;
> +		}
> +		FM10K_PCI_REG_WRITE(q->tail_ptr, q->next_trigger);
> +		q->next_trigger += q->alloc_thresh;
> +		if (q->next_trigger >= q->nb_desc) {
> +			q->next_trigger = q->alloc_thresh - 1;
> +			q->next_alloc = 0;
> +		}
> +	}
> +
> +	return count;
> +}
> +

Thanks,
Jeff