From mboxrd@z Thu Jan 1 00:00:00 1970
From: Chengwen Feng
Subject: [PATCH v2 4/4] examples/dma: add force minimal copy size parameter
Date: Mon, 11 Apr 2022 20:14:59 +0800
Message-ID: <20220411121459.23898-5-fengchengwen@huawei.com>
In-Reply-To: <20220411121459.23898-1-fengchengwen@huawei.com>
References: <20220411025634.33032-1-fengchengwen@huawei.com>
 <20220411121459.23898-1-fengchengwen@huawei.com>

This patch adds a force minimal copy size parameter
(-m/--force-min-copy-size). When a copy is performed by CPU or DMA, the
actual copy size is the maximum of the mbuf's data_len and this
parameter.

The parameter is intended for comparing the performance of CPU copy and
DMA copy: users can send small packets at a high rate to drive the
performance test while the copy size stays fixed.
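For example, assuming a locally built dpdk-dma binary (the path and EAL
core list below are illustrative), the following invocation forces every
copy to be at least 1024 bytes, even for 64-byte packets:

    ./build/examples/dpdk-dma -l 0-1 -- -p 0x1 -q 1 -c hw -m 1024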
Signed-off-by: Chengwen Feng
---
 examples/dma/dmafwd.c | 30 +++++++++++++++++++++++++++---
 1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index 6b1b777cb8..3bd86b98c1 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -25,6 +25,7 @@
 #define CMD_LINE_OPT_RING_SIZE "ring-size"
 #define CMD_LINE_OPT_BATCH_SIZE "dma-batch-size"
 #define CMD_LINE_OPT_FRAME_SIZE "max-frame-size"
+#define CMD_LINE_OPT_FORCE_COPY_SIZE "force-min-copy-size"
 #define CMD_LINE_OPT_STATS_INTERVAL "stats-interval"

 /* configurable number of RX/TX ring descriptors */
@@ -119,6 +120,7 @@ static volatile bool force_quit;

 static uint32_t dma_batch_sz = MAX_PKT_BURST;
 static uint32_t max_frame_size;
+static uint32_t force_min_copy_size;

 /* ethernet addresses of ports */
 static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS];
@@ -208,7 +210,13 @@ print_stats(char *prgname)
 			"Rx Queues = %d, ", nb_queues);
 	status_strlen += snprintf(status_string + status_strlen,
 			sizeof(status_string) - status_strlen,
-			"Ring Size = %d", ring_size);
+			"Ring Size = %d\n", ring_size);
+	status_strlen += snprintf(status_string + status_strlen,
+			sizeof(status_string) - status_strlen,
+			"Force Min Copy Size = %u Packet Data Room Size = %u",
+			force_min_copy_size,
+			rte_pktmbuf_data_room_size(dma_pktmbuf_pool) -
+			RTE_PKTMBUF_HEADROOM);

 	memset(&ts, 0, sizeof(struct total_statistics));

@@ -307,7 +315,8 @@ static inline void
 pktmbuf_sw_copy(struct rte_mbuf *src, struct rte_mbuf *dst)
 {
 	rte_memcpy(rte_pktmbuf_mtod(dst, char *),
-		rte_pktmbuf_mtod(src, char *), src->data_len);
+		rte_pktmbuf_mtod(src, char *),
+		RTE_MAX(src->data_len, force_min_copy_size));
 }
 /* >8 End of perform packet copy there is a user-defined function. */

@@ -324,7 +333,9 @@ dma_enqueue_packets(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[],
 		ret = rte_dma_copy(dev_id, 0,
 			rte_pktmbuf_iova(pkts[i]),
 			rte_pktmbuf_iova(pkts_copy[i]),
-			rte_pktmbuf_data_len(pkts[i]), 0);
+			RTE_MAX(rte_pktmbuf_data_len(pkts[i]),
+				force_min_copy_size),
+			0);

 		if (ret < 0)
 			break;
@@ -576,6 +587,7 @@ dma_usage(const char *prgname)
 	printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
 		"  -b --dma-batch-size: number of requests per DMA batch\n"
 		"  -f --max-frame-size: max frame size\n"
+		"  -m --force-min-copy-size: force a minimum copy length, even for smaller packets\n"
 		"  -p --portmask: hexadecimal bitmask of ports to configure\n"
 		"  -q NQ: number of RX queues per port (default is 1)\n"
 		"  --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
@@ -621,6 +633,7 @@ dma_parse_args(int argc, char **argv, unsigned int nb_ports)
 		"b:"  /* dma batch size */
 		"c:"  /* copy type (sw|hw) */
 		"f:"  /* max frame size */
+		"m:"  /* force min copy size */
 		"p:"  /* portmask */
 		"q:"  /* number of RX queues per port */
 		"s:"  /* ring size */
@@ -636,6 +649,7 @@ dma_parse_args(int argc, char **argv, unsigned int nb_ports)
 		{CMD_LINE_OPT_RING_SIZE, required_argument, NULL, 's'},
 		{CMD_LINE_OPT_BATCH_SIZE, required_argument, NULL, 'b'},
 		{CMD_LINE_OPT_FRAME_SIZE, required_argument, NULL, 'f'},
+		{CMD_LINE_OPT_FORCE_COPY_SIZE, required_argument, NULL, 'm'},
 		{CMD_LINE_OPT_STATS_INTERVAL, required_argument, NULL, 'i'},
 		{NULL, 0, 0, 0}
 	};
@@ -670,6 +684,10 @@ dma_parse_args(int argc, char **argv, unsigned int nb_ports)
 			}
 			break;

+		case 'm':
+			force_min_copy_size = atoi(optarg);
+			break;
+
 		/* portmask */
 		case 'p':
 			dma_enabled_port_mask = dma_parse_portmask(optarg);
@@ -1068,6 +1086,12 @@ main(int argc, char **argv)
 		rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
 	/* >8 End of allocates mempool to hold the mbufs. */

+	if (force_min_copy_size >
+	    (uint32_t)(rte_pktmbuf_data_room_size(dma_pktmbuf_pool) -
+		       RTE_PKTMBUF_HEADROOM))
+		rte_exit(EXIT_FAILURE,
+			 "Force min copy size > packet mbuf size\n");
+
 	/* Initialize each port. 8< */
 	cfg.nb_ports = 0;
 	RTE_ETH_FOREACH_DEV(portid)
-- 
2.33.0
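[Editor's note] For reference, a minimal standalone C sketch of the two
rules the patch adds: the effective copy length is the maximum of
data_len and force_min_copy_size, and the parameter must fit within an
mbuf's usable data room. SKETCH_MAX stands in for DPDK's RTE_MAX, and
data_room_size/headroom stand in for rte_pktmbuf_data_room_size() and
RTE_PKTMBUF_HEADROOM; all values are illustrative, not part of the patch.

    /* Standalone sketch of the patch's copy-size logic; all values made up. */
    #include <stdint.h>
    #include <stdio.h>

    #define SKETCH_MAX(a, b) (((a) > (b)) ? (a) : (b))

    int
    main(void)
    {
    	uint32_t data_room_size = 2176;      /* hypothetical mempool data room */
    	uint32_t headroom = 128;             /* typical RTE_PKTMBUF_HEADROOM */
    	uint32_t force_min_copy_size = 1024; /* from -m */
    	uint16_t data_len = 64;              /* a small received packet */

    	/* the sanity check main() performs after creating the mempool */
    	if (force_min_copy_size > data_room_size - headroom) {
    		fprintf(stderr, "Force min copy size > packet mbuf size\n");
    		return 1;
    	}

    	/* the copy length used by both pktmbuf_sw_copy() and rte_dma_copy() */
    	uint32_t copy_len = SKETCH_MAX((uint32_t)data_len, force_min_copy_size);
    	printf("copy %u bytes for a %u-byte packet\n", copy_len, data_len);
    	return 0;
    }

With -m 1024 and 64-byte packets, both the CPU and DMA paths copy 1024
bytes per packet, so the two paths can be compared at a fixed copy size
regardless of the wire packet size.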