From: "Baran, MarcinX"
To: "Richardson, Bruce"
CC: "dev@dpdk.org", "Modrak, PawelX"
Subject: Re: [dpdk-dev] [PATCH v5 3/6] examples/ioat: add rawdev copy mode support
Date: Fri, 27 Sep 2019 14:03:02 +0000
Message-ID: <06CDC4676D44784DA2DF9423D4B672BE15ECCCAF@HASMSX114.ger.corp.intel.com>
In-Reply-To: <20190927100536.GC1847@bricha3-MOBL.ger.corp.intel.com>

-----Original Message-----
From: Bruce Richardson
Sent: Friday, September 27, 2019 12:06 PM
To: Baran, MarcinX
Cc: dev@dpdk.org; Modrak, PawelX
Subject: Re: [dpdk-dev] [PATCH v5 3/6] examples/ioat: add rawdev copy mode support

On Fri, Sep 20, 2019 at 09:37:11AM +0200, Marcin Baran wrote:
> Added support for copying packets using a rawdev device. Each port's Rx
> queue is assigned a DMA channel for the copy.
>
> Signed-off-by: Marcin Baran
> Signed-off-by: Pawel Modrak
> ---
>  examples/ioat/ioatfwd.c | 236 ++++++++++++++++++++++++++++++++--------
>  1 file changed, 189 insertions(+), 47 deletions(-)
>
> diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
> index 3a092c6cf..c66ce7e49 100644
> --- a/examples/ioat/ioatfwd.c
> +++ b/examples/ioat/ioatfwd.c
> @@ -121,6 +121,50 @@ pktmbuf_sw_copy(struct rte_mbuf *src, struct rte_mbuf *dst)
>  		rte_pktmbuf_mtod(src, char *), src->data_len);
>  }
>
> +static uint32_t
> +ioat_enqueue_packets(struct rte_mbuf **pkts,
> +	uint32_t nb_rx, uint16_t dev_id)
> +{
> +	int ret;
> +	uint32_t i;
> +	struct rte_mbuf *pkts_copy[MAX_PKT_BURST];
> +
> +	const uint64_t addr_offset = RTE_PTR_DIFF(pkts[0]->buf_addr,
> +		&pkts[0]->rearm_data);
> +
> +	ret = rte_mempool_get_bulk(ioat_pktmbuf_pool,
> +		(void *)pkts_copy, nb_rx);
> +
> +	if (unlikely(ret < 0))
> +		rte_exit(EXIT_FAILURE, "Unable to allocate memory.\n");
> +
> +	for (i = 0; i < nb_rx; i++) {
> +		/* Perform data copy */
> +		ret = rte_ioat_enqueue_copy(dev_id,
> +			pkts[i]->buf_iova - addr_offset,
> +			pkts_copy[i]->buf_iova - addr_offset,
> +			rte_pktmbuf_data_len(pkts[i]) + addr_offset,
> +			(uintptr_t)pkts[i],
> +			(uintptr_t)pkts_copy[i],
> +			0 /* nofence */);
> +
> +		if (ret != 1)
> +			break;
> +	}
> +
> +	ret = i;
> +	/* Free any not enqueued packets. */
> +	rte_mempool_put_bulk(ioat_pktmbuf_pool, (void *)&pkts[i], nb_rx - i);
> +	rte_mempool_put_bulk(ioat_pktmbuf_pool, (void *)&pkts_copy[i],
> +		nb_rx - i);
> +
> +	return ret;
> +}
> +
>  /* Receive packets on one port and enqueue to IOAT rawdev or rte_ring. */
>  static void
>  ioat_rx_port(struct rxtx_port_config *rx_config)
> @@ -136,32 +180,40 @@ ioat_rx_port(struct rxtx_port_config *rx_config)
>  		if (nb_rx == 0)
>  			continue;
>
> -		/* Perform packet software copy, free source packets */
> -		int ret;
> -		struct rte_mbuf *pkts_burst_copy[MAX_PKT_BURST];
> -
> -		ret = rte_mempool_get_bulk(ioat_pktmbuf_pool,
> -			(void *)pkts_burst_copy, nb_rx);
> -
> -		if (unlikely(ret < 0))
> -			rte_exit(EXIT_FAILURE,
> -				"Unable to allocate memory.\n");
> -
> -		for (j = 0; j < nb_rx; j++)
> -			pktmbuf_sw_copy(pkts_burst[j],
> -				pkts_burst_copy[j]);
> -
> -		rte_mempool_put_bulk(ioat_pktmbuf_pool,
> -			(void *)pkts_burst, nb_rx);
> -
> -		nb_enq = rte_ring_enqueue_burst(
> -			rx_config->rx_to_tx_ring,
> -			(void *)pkts_burst_copy, nb_rx, NULL);
> -
> -		/* Free any not enqueued packets. */
> -		rte_mempool_put_bulk(ioat_pktmbuf_pool,
> -			(void *)&pkts_burst_copy[nb_enq],
> -			nb_rx - nb_enq);
> +		if (copy_mode == COPY_MODE_IOAT_NUM) {
> +			/* Perform packet hardware copy */
> +			nb_enq = ioat_enqueue_packets(pkts_burst,
> +				nb_rx, rx_config->ioat_ids[i]);
> +			if (nb_enq > 0)
> +				rte_ioat_do_copies(rx_config->ioat_ids[i]);
> +		} else {
> +			/* Perform packet software copy, free source packets */
> +			int ret;
> +			struct rte_mbuf *pkts_burst_copy[MAX_PKT_BURST];
> +
> +			ret = rte_mempool_get_bulk(ioat_pktmbuf_pool,
> +				(void *)pkts_burst_copy, nb_rx);
> +
> +			if (unlikely(ret < 0))
> +				rte_exit(EXIT_FAILURE,
> +					"Unable to allocate memory.\n");
> +
> +			for (j = 0; j < nb_rx; j++)
> +				pktmbuf_sw_copy(pkts_burst[j],
> +					pkts_burst_copy[j]);
> +
> +			rte_mempool_put_bulk(ioat_pktmbuf_pool,
> +				(void *)pkts_burst, nb_rx);
> +
> +			nb_enq = rte_ring_enqueue_burst(
> +				rx_config->rx_to_tx_ring,
> +				(void *)pkts_burst_copy, nb_rx, NULL);
> +
> +			/* Free any not enqueued packets. */
> +			rte_mempool_put_bulk(ioat_pktmbuf_pool,
> +				(void *)&pkts_burst_copy[nb_enq],
> +				nb_rx - nb_enq);
> +		}

Would the diff in this patch be smaller if you switched the order of the branches, so that the SW copy leg, which was added first, was processed first? You could even add in a dummy branch in patch 2, so that the indentation for that section remains unchanged.

/Bruce

[Marcin] Switched the order and added a dummy branch in patch 2. Also changed the ioat_tx_port() function the same way in v6.