From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 24 Sep 2021 12:00:02 +0800
From: fengchengwen
To: Kevin Laatz
References: <20210910172737.2561156-1-kevin.laatz@intel.com> <20210917164136.3499904-1-kevin.laatz@intel.com> <20210917164136.3499904-5-kevin.laatz@intel.com>
In-Reply-To: <20210917164136.3499904-5-kevin.laatz@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2 4/6] examples/ioat: port application to dmadev APIs
List-Id: DPDK patches and discussions
On 2021/9/18 0:41, Kevin Laatz wrote:
> The dmadev library abstraction allows applications to use the same APIs for
> all DMA device drivers in DPDK. This patch updates the ioatfwd application
> to make use of the new dmadev APIs, in turn making it a generic application
> which can be used with any of the DMA device drivers.
>
> Signed-off-by: Kevin Laatz
>
> ---
> v2:
>   - dmadev api name updates following rebase
>   - use rte_config macro for max devs
>   - use PRIu64 for printing stats
> ---
>  examples/ioat/ioatfwd.c   | 239 ++++++++++++++++----------------
>  examples/ioat/meson.build |   8 +-
>  2 files changed, 105 insertions(+), 142 deletions(-)
>

[snip]

>
>  typedef enum copy_mode_t {
> @@ -95,6 +94,16 @@ static copy_mode_t copy_mode = COPY_MODE_IOAT_NUM;
>   */
>  static unsigned short ring_size = 2048;
>
> +/* global mbuf arrays for tracking DMA bufs */
> +#define MBUF_RING_SIZE	1024
> +#define MBUF_RING_MASK	(MBUF_RING_SIZE - 1)
> +struct dma_bufs {
> +	struct rte_mbuf *bufs[MBUF_RING_SIZE];
> +	struct rte_mbuf *copies[MBUF_RING_SIZE];
> +	uint16_t sent;
> +};

The dma_bufs structure only tracks 1024 address entries, while the dmadev virtual channel ring size defaults to 2048. If the DMA hardware cannot drain the ring in time, entries in dma_bufs may be overwritten before the dma_dequeue() API retrieves them.

> +static struct dma_bufs dma_bufs[RTE_DMADEV_DEFAULT_MAX_DEVS];
> +
>  /* global transmission config */
>  struct rxtx_transmission_config cfg;

[snip]

>  }
>  /* >8 End of configuration of device.
>   */
>
> @@ -820,18 +789,16 @@ assign_rawdevs(void)
>
>  	for (i = 0; i < cfg.nb_ports; i++) {
>  		for (j = 0; j < cfg.ports[i].nb_queues; j++) {
> -			struct rte_rawdev_info rdev_info = { 0 };
> +			struct rte_dma_info dmadev_info = { 0 };
>
>  			do {
> -				if (rdev_id == rte_rawdev_count())
> +				if (rdev_id == rte_dma_count_avail())
>  					goto end;
> -				rte_rawdev_info_get(rdev_id++, &rdev_info, 0);
> -			} while (rdev_info.driver_name == NULL ||
> -					strcmp(rdev_info.driver_name,
> -						IOAT_PMD_RAWDEV_NAME_STR) != 0);
> +				rte_dma_info_get(rdev_id++, &dmadev_info);
> +			} while (!rte_dma_is_valid(rdev_id));
>
> -			cfg.ports[i].ioat_ids[j] = rdev_id - 1;
> -			configure_rawdev_queue(cfg.ports[i].ioat_ids[j]);
> +			cfg.ports[i].dmadev_ids[j] = rdev_id - 1;
> +			configure_rawdev_queue(cfg.ports[i].dmadev_ids[j]);

Tests show that if there are four dmadevs, only three can be assigned here:
1st assignment: rdev_id=0, succeeds, dmadev_id=0, rdev_id becomes 1
2nd assignment: rdev_id=1, succeeds, dmadev_id=1, rdev_id becomes 2
3rd assignment: rdev_id=2, succeeds, dmadev_id=2, rdev_id becomes 3
4th assignment: rdev_id=3, fails, because after rte_dma_info_get(rdev_id++, ...) the rdev_id is 4, which is not a valid id, so the loop falls through to the count check and exits.

I recommend using rte_dma_next_dev(), which Bruce introduced.

>  			++nb_rawdev;
>  		}
>  	}
> @@ -840,7 +807,7 @@ assign_rawdevs(void)
>  		rte_exit(EXIT_FAILURE,
>  			"Not enough IOAT rawdevs (%u) for all queues (%u).\n",
>  			nb_rawdev, cfg.nb_ports * cfg.ports[0].nb_queues);
> -	RTE_LOG(INFO, IOAT, "Number of used rawdevs: %u.\n", nb_rawdev);
> +	RTE_LOG(INFO, DMA, "Number of used rawdevs: %u.\n", nb_rawdev);
>  }
>  /* >8 End of using IOAT rawdev API functions. */
>

[snip]