Date: Fri, 27 Sep 2019 11:05:36 +0100
From: Bruce Richardson
To: Marcin Baran
Cc: dev@dpdk.org, Pawel Modrak
Message-ID: <20190927100536.GC1847@bricha3-MOBL.ger.corp.intel.com>
In-Reply-To: <20190920073714.1314-4-marcinx.baran@intel.com>
Subject: Re: [dpdk-dev] [PATCH v5 3/6] examples/ioat: add rawdev copy mode support

On Fri, Sep 20, 2019 at 09:37:11AM +0200, Marcin Baran wrote:
> Added support for copying packets using
> a rawdev device. Each port's Rx queue is
> assigned a DMA channel for the copy.
> 
> Signed-off-by: Marcin Baran
> Signed-off-by: Pawel Modrak
> ---
>  examples/ioat/ioatfwd.c | 236 ++++++++++++++++++++++++++++++++--------
>  1 file changed, 189 insertions(+), 47 deletions(-)
> 
> diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
> index 3a092c6cf..c66ce7e49 100644
> --- a/examples/ioat/ioatfwd.c
> +++ b/examples/ioat/ioatfwd.c
> @@ -121,6 +121,50 @@ pktmbuf_sw_copy(struct rte_mbuf *src, struct rte_mbuf *dst)
>  		rte_pktmbuf_mtod(src, char *), src->data_len);
>  }
>  
> +static uint32_t
> +ioat_enqueue_packets(struct rte_mbuf **pkts,
> +	uint32_t nb_rx, uint16_t dev_id)
> +{
> +	int ret;
> +	uint32_t i;
> +	struct rte_mbuf *pkts_copy[MAX_PKT_BURST];
> +
> +	const uint64_t addr_offset = RTE_PTR_DIFF(pkts[0]->buf_addr,
> +		&pkts[0]->rearm_data);
> +
> +	ret = rte_mempool_get_bulk(ioat_pktmbuf_pool,
> +		(void *)pkts_copy, nb_rx);
> +
> +	if (unlikely(ret < 0))
> +		rte_exit(EXIT_FAILURE, "Unable to allocate memory.\n");
> +
> +	for (i = 0; i < nb_rx; i++) {
> +		/* Perform data copy */
> +		ret = rte_ioat_enqueue_copy(dev_id,
> +			pkts[i]->buf_iova - addr_offset,
> +			pkts_copy[i]->buf_iova - addr_offset,
> +			rte_pktmbuf_data_len(pkts[i]) + addr_offset,
> +			(uintptr_t)pkts[i],
> +			(uintptr_t)pkts_copy[i],
> +			0 /* nofence */);
> +
> +		if (ret != 1)
> +			break;
> +	}
> +
> +	ret = i;
> +	/* Free any not enqueued packets. */
> +	rte_mempool_put_bulk(ioat_pktmbuf_pool, (void *)&pkts[i], nb_rx - i);
> +	rte_mempool_put_bulk(ioat_pktmbuf_pool, (void *)&pkts_copy[i],
> +		nb_rx - i);
> +
> +	return ret;
> +}
> +
>  /* Receive packets on one port and enqueue to IOAT rawdev or rte_ring. */
>  static void
>  ioat_rx_port(struct rxtx_port_config *rx_config)
> @@ -136,32 +180,40 @@ ioat_rx_port(struct rxtx_port_config *rx_config)
>  		if (nb_rx == 0)
>  			continue;
>  
> -		/* Perform packet software copy, free source packets */
> -		int ret;
> -		struct rte_mbuf *pkts_burst_copy[MAX_PKT_BURST];
> -
> -		ret = rte_mempool_get_bulk(ioat_pktmbuf_pool,
> -			(void *)pkts_burst_copy, nb_rx);
> -
> -		if (unlikely(ret < 0))
> -			rte_exit(EXIT_FAILURE,
> -				"Unable to allocate memory.\n");
> -
> -		for (j = 0; j < nb_rx; j++)
> -			pktmbuf_sw_copy(pkts_burst[j],
> -				pkts_burst_copy[j]);
> -
> -		rte_mempool_put_bulk(ioat_pktmbuf_pool,
> -			(void *)pkts_burst, nb_rx);
> -
> -		nb_enq = rte_ring_enqueue_burst(
> -			rx_config->rx_to_tx_ring,
> -			(void *)pkts_burst_copy, nb_rx, NULL);
> -
> -		/* Free any not enqueued packets. */
> -		rte_mempool_put_bulk(ioat_pktmbuf_pool,
> -			(void *)&pkts_burst_copy[nb_enq],
> -			nb_rx - nb_enq);
> +		if (copy_mode == COPY_MODE_IOAT_NUM) {
> +			/* Perform packet hardware copy */
> +			nb_enq = ioat_enqueue_packets(pkts_burst,
> +				nb_rx, rx_config->ioat_ids[i]);
> +			if (nb_enq > 0)
> +				rte_ioat_do_copies(rx_config->ioat_ids[i]);
> +		} else {
> +			/* Perform packet software copy, free source packets */
> +			int ret;
> +			struct rte_mbuf *pkts_burst_copy[MAX_PKT_BURST];
> +
> +			ret = rte_mempool_get_bulk(ioat_pktmbuf_pool,
> +				(void *)pkts_burst_copy, nb_rx);
> +
> +			if (unlikely(ret < 0))
> +				rte_exit(EXIT_FAILURE,
> +					"Unable to allocate memory.\n");
> +
> +			for (j = 0; j < nb_rx; j++)
> +				pktmbuf_sw_copy(pkts_burst[j],
> +					pkts_burst_copy[j]);
> +
> +			rte_mempool_put_bulk(ioat_pktmbuf_pool,
> +				(void *)pkts_burst, nb_rx);
> +
> +			nb_enq = rte_ring_enqueue_burst(
> +				rx_config->rx_to_tx_ring,
> +				(void *)pkts_burst_copy, nb_rx, NULL);
> +
> +			/* Free any not enqueued packets. */
> +			rte_mempool_put_bulk(ioat_pktmbuf_pool,
> +				(void *)&pkts_burst_copy[nb_enq],
> +				nb_rx - nb_enq);
> +		}

Would the diff in this patch be smaller if you switched the order of the
branches, so that the SW copy leg, which was added first, is processed
first? You could even add a dummy branch in patch 2, so that the
indentation for that section remains unchanged.
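To make that concrete, here is a rough, untested sketch of the shape I
have in mind, reusing the identifiers from the patch above and assuming
a COPY_MODE_SW_NUM constant alongside COPY_MODE_IOAT_NUM. Patch 2 would
introduce the if/else with the HW leg still empty, and this patch would
then only need to fill in the else branch:

	if (copy_mode == COPY_MODE_SW_NUM) {
		/* Perform packet software copy, free source packets.
		 * Body identical to patch 2, so it drops out of this
		 * patch's diff entirely. */
		int ret;
		struct rte_mbuf *pkts_burst_copy[MAX_PKT_BURST];

		ret = rte_mempool_get_bulk(ioat_pktmbuf_pool,
			(void *)pkts_burst_copy, nb_rx);
		if (unlikely(ret < 0))
			rte_exit(EXIT_FAILURE,
				"Unable to allocate memory.\n");

		for (j = 0; j < nb_rx; j++)
			pktmbuf_sw_copy(pkts_burst[j],
				pkts_burst_copy[j]);

		rte_mempool_put_bulk(ioat_pktmbuf_pool,
			(void *)pkts_burst, nb_rx);

		nb_enq = rte_ring_enqueue_burst(
			rx_config->rx_to_tx_ring,
			(void *)pkts_burst_copy, nb_rx, NULL);

		/* Free any not enqueued packets. */
		rte_mempool_put_bulk(ioat_pktmbuf_pool,
			(void *)&pkts_burst_copy[nb_enq],
			nb_rx - nb_enq);
	} else {
		/* Perform packet hardware copy - the only piece this
		 * patch would then need to add. */
		nb_enq = ioat_enqueue_packets(pkts_burst,
			nb_rx, rx_config->ioat_ids[i]);
		if (nb_enq > 0)
			rte_ioat_do_copies(rx_config->ioat_ids[i]);
	}

/Bruce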