From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
To: dev@dpdk.org
Cc: thomasm@monjalon.net, stephen@networkplumber.org, ferruh.yigit@intel.com,
	olivier.matz@6wind.com, jerinjacobk@gmail.com, maxime.coquelin@redhat.com,
	david.marchand@redhat.com, arybchenko@solarflare.com
Date: Wed, 7 Oct 2020 15:06:48 +0000
Message-Id: <1602083215-22921-3-git-send-email-viacheslavo@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1602083215-22921-1-git-send-email-viacheslavo@nvidia.com>
References: <1602083215-22921-1-git-send-email-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 2/9] app/testpmd: add multiple pools per core creation

The command line parameter --mbuf-size is updated: it can now handle
multiple comma-separated values, for example:

    --mbuf-size=2176,512,768,4096

specifying the creation of extra memory pools with the requested mbuf
data buffer sizes. If some buffer split feature is engaged, the extra
memory pools can be used to configure the Rx queues with
rte_the_dev_rx_queue_setup_ex().

The extra pools are created with the requested sizes, and the pool
names get an appended index: mbuf_pool_socket_%socket_%index. Index
zero is used for the first, mandatory pool, which keeps the legacy
name to maintain compatibility with existing code.
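As a quick illustration of the naming scheme, the sketch below mirrors the
mbuf_poolname_build()/mbuf_pool_find() helpers changed in testpmd.h by this
patch. It is a minimal standalone sketch; the example_* names are
illustrative only and are not part of the patch:

    /* Illustrative sketch of the per-index mempool naming/lookup scheme. */
    #include <stdio.h>
    #include <rte_mempool.h>

    /* Index 0 keeps the legacy name "mbuf_pool_socket_<sock>";
     * extra pools get "_<idx>" appended, e.g. "mbuf_pool_socket_0_1".
     */
    static inline void
    example_poolname_build(unsigned int sock_id, char *mp_name,
                           int name_size, unsigned int idx)
    {
        if (idx == 0)
            snprintf(mp_name, name_size, "mbuf_pool_socket_%u", sock_id);
        else
            snprintf(mp_name, name_size, "mbuf_pool_socket_%u_%u",
                     sock_id, idx);
    }

    /* Look up the pool created for a given socket and mbuf-size index. */
    static inline struct rte_mempool *
    example_pool_find(unsigned int sock_id, unsigned int idx)
    {
        char pool_name[RTE_MEMPOOL_NAMESIZE];

        example_poolname_build(sock_id, pool_name, sizeof(pool_name), idx);
        return rte_mempool_lookup(pool_name);
    }

Because index zero keeps the unsuffixed legacy name, every call site
converted to mbuf_pool_find(socket_id, 0) resolves to the same pool as
before this patch.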
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 app/test-pmd/bpf_cmd.c                |  4 +--
 app/test-pmd/cmdline.c                |  2 +-
 app/test-pmd/config.c                 |  6 ++--
 app/test-pmd/parameters.c             | 24 +++++++++----
 app/test-pmd/testpmd.c                | 63 +++++++++++++++++++----------------
 app/test-pmd/testpmd.h                | 24 ++++++++++---
 doc/guides/testpmd_app_ug/run_app.rst |  7 ++--
 7 files changed, 83 insertions(+), 47 deletions(-)

diff --git a/app/test-pmd/bpf_cmd.c b/app/test-pmd/bpf_cmd.c
index 0f984cc..d8bb7ca 100644
--- a/app/test-pmd/bpf_cmd.c
+++ b/app/test-pmd/bpf_cmd.c
@@ -69,7 +69,7 @@ struct cmd_bpf_ld_result {
 
 	*flags = RTE_BPF_ETH_F_NONE;
 	arg->type = RTE_BPF_ARG_PTR;
-	arg->size = mbuf_data_size;
+	arg->size = mbuf_data_size[0];
 
 	for (i = 0; str[i] != 0; i++) {
 		v = toupper(str[i]);
@@ -78,7 +78,7 @@ struct cmd_bpf_ld_result {
 		else if (v == 'M') {
 			arg->type = RTE_BPF_ARG_PTR_MBUF;
 			arg->size = sizeof(struct rte_mbuf);
-			arg->buf_size = mbuf_data_size;
+			arg->buf_size = mbuf_data_size[0];
 		} else if (v == '-')
 			continue;
 		else
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 08e123f..3f57182 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2898,7 +2898,7 @@ struct cmd_setup_rxtx_queue {
 		if (!numa_support || socket_id == NUMA_NO_CONFIG)
 			socket_id = port->socket_id;
 
-		mp = mbuf_pool_find(socket_id);
+		mp = mbuf_pool_find(socket_id, 0);
 		if (mp == NULL) {
 			printf("Failed to setup RX queue: "
 				"No mempool allocation"
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 17a6efe..7048288 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -625,7 +625,7 @@ static int bus_match_all(const struct rte_bus *bus, const void *data)
 	printf("\nConnect to socket: %u", port->socket_id);
 
 	if (port_numa[port_id] != NUMA_NO_CONFIG) {
-		mp = mbuf_pool_find(port_numa[port_id]);
+		mp = mbuf_pool_find(port_numa[port_id], 0);
 		if (mp)
 			printf("\nmemory allocation on the socket: %d",
 				port_numa[port_id]);
@@ -3124,9 +3124,9 @@ struct igb_ring_desc_16_bytes {
 	 */
 	tx_pkt_len = 0;
 	for (i = 0; i < nb_segs; i++) {
-		if (seg_lengths[i] > (unsigned) mbuf_data_size) {
+		if (seg_lengths[i] > mbuf_data_size[0]) {
 			printf("length[%u]=%u > mbuf_data_size=%u - give up\n",
-			       i, seg_lengths[i], (unsigned) mbuf_data_size);
+			       i, seg_lengths[i], mbuf_data_size[0]);
 			return;
 		}
 		tx_pkt_len = (uint16_t)(tx_pkt_len + seg_lengths[i]);
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 1ead595..1f40d73 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -106,7 +106,9 @@
 	       "(flag: 1 for RX; 2 for TX; 3 for RX and TX).\n");
 	printf("  --socket-num=N: set socket from which all memory is allocated "
 	       "in NUMA mode.\n");
-	printf("  --mbuf-size=N: set the data size of mbuf to N bytes.\n");
+	printf("  --mbuf-size=N,[N1[,..Nn]: set the data size of mbuf to "
+	       "N bytes. If multiple numbers are specified the extra pools "
+	       "will be created to receive with packet split features\n");
 	printf("  --total-num-mbufs=N: set the number of mbufs to be allocated "
 	       "in mbuf pools.\n");
 	printf("  --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
@@ -890,12 +892,22 @@
 				}
 			}
 			if (!strcmp(lgopts[opt_idx].name, "mbuf-size")) {
-				n = atoi(optarg);
-				if (n > 0 && n <= 0xFFFF)
-					mbuf_data_size = (uint16_t) n;
-				else
+				unsigned int mb_sz[MAX_SEGS_BUFFER_SPLIT];
+				unsigned int nb_segs, i;
+
+				nb_segs = parse_item_list(optarg, "mbuf-size",
+						MAX_SEGS_BUFFER_SPLIT, mb_sz, 0);
+				if (nb_segs <= 0)
 					rte_exit(EXIT_FAILURE,
-						"mbuf-size should be > 0 and < 65536\n");
+						 "bad mbuf-size\n");
+				for (i = 0; i < nb_segs; i++) {
+					if (mb_sz[i] <= 0 || mb_sz[i] > 0xFFFF)
+						rte_exit(EXIT_FAILURE,
+							 "mbuf-size should be "
+							 "> 0 and < 65536\n");
+					mbuf_data_size[i] = (uint16_t) mb_sz[i];
+				}
+				mbuf_data_size_n = nb_segs;
 			}
 			if (!strcmp(lgopts[opt_idx].name, "total-num-mbufs")) {
 				n = atoi(optarg);
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index fe6450c..f5060ee 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -186,7 +186,7 @@ struct fwd_engine * fwd_engines[] = {
 	NULL,
 };
 
-struct rte_mempool *mempools[RTE_MAX_NUMA_NODES];
+struct rte_mempool *mempools[RTE_MAX_NUMA_NODES * MAX_SEGS_BUFFER_SPLIT];
 uint16_t mempool_flags;
 
 struct fwd_config cur_fwd_config;
@@ -195,7 +195,10 @@ struct fwd_engine * fwd_engines[] = {
 uint32_t burst_tx_delay_time = BURST_TX_WAIT_US;
 uint32_t burst_tx_retry_num = BURST_TX_RETRIES;
 
-uint16_t mbuf_data_size = DEFAULT_MBUF_DATA_SIZE; /**< Mbuf data space size. */
+uint32_t mbuf_data_size_n = 1; /* Number of specified mbuf sizes. */
+uint16_t mbuf_data_size[MAX_SEGS_BUFFER_SPLIT] = {
+	DEFAULT_MBUF_DATA_SIZE
+}; /**< Mbuf data space size. */
 uint32_t param_total_num_mbufs = 0;  /**< number of mbufs in all pools - if
                                       * specified on command-line. */
 uint16_t stats_period; /**< Period to show statistics (disabled by default) */
@@ -955,14 +958,14 @@ struct extmem_param {
  */
 static struct rte_mempool *
 mbuf_pool_create(uint16_t mbuf_seg_size, unsigned nb_mbuf,
-		 unsigned int socket_id)
+		 unsigned int socket_id, unsigned int size_idx)
 {
 	char pool_name[RTE_MEMPOOL_NAMESIZE];
 	struct rte_mempool *rte_mp = NULL;
 	uint32_t mb_size;
 
 	mb_size = sizeof(struct rte_mbuf) + mbuf_seg_size;
-	mbuf_poolname_build(socket_id, pool_name, sizeof(pool_name));
+	mbuf_poolname_build(socket_id, pool_name, sizeof(pool_name), size_idx);
 
 	TESTPMD_LOG(INFO,
 		"create a new mbuf pool <%s>: n=%u, size=%u, socket=%u\n",
@@ -1485,8 +1488,8 @@ struct extmem_param {
 				port->dev_info.rx_desc_lim.nb_mtu_seg_max;
 
 			if ((data_size + RTE_PKTMBUF_HEADROOM) >
-							mbuf_data_size) {
-				mbuf_data_size = data_size +
+							mbuf_data_size[0]) {
+				mbuf_data_size[0] = data_size +
 						 RTE_PKTMBUF_HEADROOM;
 				warning = 1;
 			}
@@ -1494,9 +1497,9 @@ struct extmem_param {
 	}
 
 	if (warning)
-		TESTPMD_LOG(WARNING, "Configured mbuf size %hu\n",
-			    mbuf_data_size);
-
+		TESTPMD_LOG(WARNING,
+			    "Configured mbuf size of the first segment %hu\n",
+			    mbuf_data_size[0]);
 	/*
 	 * Create pools of mbuf.
 	 * If NUMA support is disabled, create a single pool of mbuf in
@@ -1516,21 +1519,23 @@ struct extmem_param {
 	}
 
 	if (numa_support) {
-		uint8_t i;
+		uint8_t i, j;
 
 		for (i = 0; i < num_sockets; i++)
-			mempools[i] = mbuf_pool_create(mbuf_data_size,
-						       nb_mbuf_per_pool,
-						       socket_ids[i]);
+			for (j = 0; j < mbuf_data_size_n; j++)
+				mempools[i * MAX_SEGS_BUFFER_SPLIT + j] =
+					mbuf_pool_create(mbuf_data_size[j],
+							 nb_mbuf_per_pool,
+							 socket_ids[i], 0);
 	} else {
-		if (socket_num == UMA_NO_CONFIG)
-			mempools[0] = mbuf_pool_create(mbuf_data_size,
-						       nb_mbuf_per_pool, 0);
-		else
-			mempools[socket_num] = mbuf_pool_create
-						(mbuf_data_size,
-						 nb_mbuf_per_pool,
-						 socket_num);
+		uint8_t i;
+
+		for (i = 0; i < mbuf_data_size_n; i++)
+			mempools[i] = mbuf_pool_create
+					(mbuf_data_size[i],
+					 nb_mbuf_per_pool,
+					 socket_num == UMA_NO_CONFIG ?
+					 0 : socket_num, 0);
 	}
 
 	init_port_config();
@@ -1542,10 +1547,10 @@ struct extmem_param {
 	 */
 	for (lc_id = 0; lc_id < nb_lcores; lc_id++) {
 		mbp = mbuf_pool_find(
-			rte_lcore_to_socket_id(fwd_lcores_cpuids[lc_id]));
+			rte_lcore_to_socket_id(fwd_lcores_cpuids[lc_id]), 0);
 
 		if (mbp == NULL)
-			mbp = mbuf_pool_find(0);
+			mbp = mbuf_pool_find(0, 0);
 		fwd_lcores[lc_id]->mbp = mbp;
 		/* initialize GSO context */
 		fwd_lcores[lc_id]->gso_ctx.direct_pool = mbp;
@@ -2498,7 +2503,8 @@ struct extmem_param {
 			if ((numa_support) &&
 				(rxring_numa[pi] != NUMA_NO_CONFIG)) {
 				struct rte_mempool * mp =
-					mbuf_pool_find(rxring_numa[pi]);
+					mbuf_pool_find
+						(rxring_numa[pi], 0);
 				if (mp == NULL) {
 					printf("Failed to setup RX queue:"
 						"No mempool allocation"
@@ -2514,7 +2520,8 @@ struct extmem_param {
 					     mp);
 			} else {
 				struct rte_mempool *mp =
-					mbuf_pool_find(port->socket_id);
+					mbuf_pool_find
+						(port->socket_id, 0);
 				if (mp == NULL) {
 					printf("Failed to setup RX queue:"
 						"No mempool allocation"
@@ -2928,13 +2935,13 @@ struct extmem_param {
 pmd_test_exit(void)
 {
 	portid_t pt_id;
+	unsigned int i;
 	int ret;
-	int i;
 
 	if (test_done == 0)
 		stop_packet_forwarding();
 
-	for (i = 0 ; i < RTE_MAX_NUMA_NODES ; i++) {
+	for (i = 0 ; i < RTE_DIM(mempools) ; i++) {
 		if (mempools[i]) {
 			if (mp_alloc_type == MP_ALLOC_ANON)
 				rte_mempool_mem_iter(mempools[i], dma_unmap_cb,
@@ -2978,7 +2985,7 @@ struct extmem_param {
 			return;
 		}
 	}
-	for (i = 0 ; i < RTE_MAX_NUMA_NODES ; i++) {
+	for (i = 0 ; i < RTE_DIM(mempools) ; i++) {
 		if (mempools[i])
 			rte_mempool_free(mempools[i]);
 	}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index c7e7e41..e5cdd12 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -42,6 +42,13 @@
  */
 #define RTE_MAX_SEGS_PER_PKT 255 /**< nb_segs is a 8-bit unsigned char. */
 
+/*
+ * The maximum number of segments per packet is used to configure
+ * buffer split feature, also specifies the maximum amount of
+ * optional Rx pools to allocate mbufs to split.
+ */
+#define MAX_SEGS_BUFFER_SPLIT 8 /**< nb_segs is a 8-bit unsigned char. */
+
 #define MAX_PKT_BURST 512
 #define DEF_PKT_BURST 32
 
@@ -393,7 +400,9 @@ struct queue_stats_mappings {
 extern uint8_t dcb_config;
 extern uint8_t dcb_test;
 
-extern uint16_t mbuf_data_size; /**< Mbuf data space size. */
+extern uint32_t mbuf_data_size_n;
+extern uint16_t mbuf_data_size[MAX_SEGS_BUFFER_SPLIT];
+/**< Mbuf data space size. */
 extern uint32_t param_total_num_mbufs;
 extern uint16_t stats_period;
 
@@ -604,17 +613,22 @@ struct mplsoudp_decap_conf {
 
 /* Mbuf Pools */
 static inline void
-mbuf_poolname_build(unsigned int sock_id, char* mp_name, int name_size)
+mbuf_poolname_build(unsigned int sock_id, char *mp_name,
+		    int name_size, unsigned int idx)
 {
-	snprintf(mp_name, name_size, "mbuf_pool_socket_%u", sock_id);
+	if (!idx)
+		snprintf(mp_name, name_size, "mbuf_pool_socket_%u", sock_id);
+	else
+		snprintf(mp_name, name_size, "mbuf_pool_socket_%u_%u",
+			 sock_id, idx);
 }
 
 static inline struct rte_mempool *
-mbuf_pool_find(unsigned int sock_id)
+mbuf_pool_find(unsigned int sock_id, unsigned int idx)
 {
 	char pool_name[RTE_MEMPOOL_NAMESIZE];
 
-	mbuf_poolname_build(sock_id, pool_name, sizeof(pool_name));
+	mbuf_poolname_build(sock_id, pool_name, sizeof(pool_name), idx);
 	return rte_mempool_lookup((const char *)pool_name);
 }
 
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index e2539f6..2d5a263 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -107,9 +107,12 @@ The command line options are:
 
     Set the socket from which all memory is allocated in NUMA mode,
     where 0 <= N < number of sockets on the board.
 
-*   ``--mbuf-size=N``
+*   ``--mbuf-size=N[,N1[,...Nn]``
 
-    Set the data size of the mbufs used to N bytes, where N < 65536. The default value is 2048.
+    Set the data size of the mbufs used to N bytes, where N < 65536.
+    The default value is 2048. If multiple mbuf-size values are specified the
+    extra memory pools will be created for allocating mbufs to receive packets
+    with buffer splitting features.
 
 *   ``--total-num-mbufs=N``
-- 
1.8.3.1