From: Shreyansh Jain <shreyansh.jain@nxp.com>
To: dev@dpdk.org
Cc: thomas@monjalon.net, Shreyansh Jain <shreyansh.jain@nxp.com>
Date: Thu, 25 Apr 2019 15:10:19 +0530
Subject: [dpdk-dev] [PATCH v2] examples/l3fwd: support separate buffer pool per port
Message-Id: <20190425094019.11430-1-shreyansh.jain@nxp.com>
In-Reply-To: <20190103112932.4415-1-shreyansh.jain@nxp.com>
References: <20190103112932.4415-1-shreyansh.jain@nxp.com>
X-Mailer: git-send-email 2.17.1

Traditionally, the l3fwd application creates a single buffer pool shared
by all ports (one pool per socket when NUMA is enabled). Creating a
separate pool per port can improve performance, as packet alloc/dealloc
requests are then isolated across ports (and their corresponding lcores).

This patch adds a '--per-port-pool' argument to the l3fwd application.
By default, the old mode of a single pool shared across ports (split on
sockets) remains active.

The l3fwd user guide is updated accordingly.

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Ruifeng Wang
---
v2:
 - Updated documentation

 doc/guides/sample_app_ug/l3_forward.rst |  3 +
 examples/l3fwd/main.c                   | 74 ++++++++++++++++++-------
 2 files changed, 56 insertions(+), 21 deletions(-)

diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index ddd0f9a86..670918c57 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -55,6 +55,7 @@ The application has a number of command line options::
     [--hash-entry-num]
     [--ipv6]
     [--parse-ptype]
+    [--per-port-pool]
 
 Where,
 
@@ -83,6 +84,8 @@ Where,
 
 * ``--parse-ptype:`` Optional, set to use software to analyze packet type. Without this option, hardware will check the packet type.
 
+* ``--per-port-pool:`` Optional, set to use independent buffer pools per port. Without this option, a single buffer pool is used for all ports.
+
 For example, consider a dual processor socket platform with 8 physical cores, where cores 0-7 and 16-23 appear on socket 0, while cores 8-15 and 24-31 appear on socket 1.
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index e4b99efe0..7b9683187 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -69,11 +69,13 @@ static int promiscuous_on;
 static int l3fwd_lpm_on;
 static int l3fwd_em_on;
 
+/* Global variables. */
+
 static int numa_on = 1; /**< NUMA is enabled by default. */
 static int parse_ptype; /**< Parse packet type using rx callback, and */
 			/**< disabled by default */
-
-/* Global variables. */
+static int per_port_pool; /**< Use separate buffer pools per port; disabled */
+			  /**< by default */
 
 volatile bool force_quit;
 
@@ -133,7 +135,8 @@ static struct rte_eth_conf port_conf = {
 	},
 };
 
-static struct rte_mempool * pktmbuf_pool[NB_SOCKETS];
+static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
+static uint8_t lkp_per_socket[NB_SOCKETS];
 
 struct l3fwd_lkp_mode {
 	void  (*setup)(int);
@@ -285,7 +288,8 @@ print_usage(const char *prgname)
 		" [--no-numa]"
 		" [--hash-entry-num]"
 		" [--ipv6]"
-		" [--parse-ptype]\n\n"
+		" [--parse-ptype]"
+		" [--per-port-pool]\n\n"
 
 		"  -p PORTMASK: Hexadecimal bitmask of ports to configure\n"
 		"  -P : Enable promiscuous mode\n"
@@ -299,7 +303,8 @@ print_usage(const char *prgname)
 		"  --no-numa: Disable numa awareness\n"
 		"  --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n"
 		"  --ipv6: Set if running ipv6 packets\n"
-		"  --parse-ptype: Set to use software to analyze packet type\n\n",
+		"  --parse-ptype: Set to use software to analyze packet type\n"
+		"  --per-port-pool: Use separate buffer pool per port\n\n",
 		prgname);
 }
 
@@ -452,6 +457,7 @@ static const char short_options[] =
 #define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
 #define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
 #define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
+#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
 
 enum {
 	/* long options mapped to a short option */
@@ -465,6 +471,7 @@ enum {
 	CMD_LINE_OPT_ENABLE_JUMBO_NUM,
 	CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
 	CMD_LINE_OPT_PARSE_PTYPE_NUM,
+	CMD_LINE_OPT_PARSE_PER_PORT_POOL,
 };
 
 static const struct option lgopts[] = {
@@ -475,6 +482,7 @@ static const struct option lgopts[] = {
 	{CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
 	{CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
 	{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
+	{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
 	{NULL, 0, 0, 0}
 };
 
@@ -485,10 +493,10 @@ static const struct option lgopts[] = {
 /*
  * This expression is used to calculate the number of mbufs needed
  * depending on user input, taking into account memory for rx and
  * tx hardware rings, cache per lcore and mtable per port per lcore.
  * RTE_MAX is used to ensure that NB_MBUF never goes below a minimum
  * value of 8192
  */
-#define NB_MBUF RTE_MAX(	\
-	(nb_ports*nb_rx_queue*nb_rxd +		\
-	nb_ports*nb_lcores*MAX_PKT_BURST +	\
-	nb_ports*n_tx_queue*nb_txd +		\
+#define NB_MBUF(nports) RTE_MAX(	\
+	(nports*nb_rx_queue*nb_rxd +		\
+	nports*nb_lcores*MAX_PKT_BURST +	\
+	nports*n_tx_queue*nb_txd +		\
 	nb_lcores*MEMPOOL_CACHE_SIZE),		\
 	(unsigned)8192)
 
@@ -594,6 +602,11 @@ parse_args(int argc, char **argv)
 			parse_ptype = 1;
 			break;
 
+		case CMD_LINE_OPT_PARSE_PER_PORT_POOL:
+			printf("per port buffer pool is enabled\n");
+			per_port_pool = 1;
+			break;
+
 		default:
 			print_usage(prgname);
 			return -1;
@@ -642,7 +655,7 @@ print_ethaddr(const char *name, const struct ether_addr *eth_addr)
 }
 
 static int
-init_mem(unsigned nb_mbuf)
+init_mem(uint16_t portid, unsigned int nb_mbuf)
 {
 	struct lcore_conf *qconf;
 	int socketid;
@@ -664,13 +677,14 @@ init_mem(unsigned nb_mbuf)
 				socketid, lcore_id, NB_SOCKETS);
 		}
 
-		if (pktmbuf_pool[socketid] == NULL) {
-			snprintf(s, sizeof(s), "mbuf_pool_%d", socketid);
-			pktmbuf_pool[socketid] =
+		if (pktmbuf_pool[portid][socketid] == NULL) {
+			snprintf(s, sizeof(s), "mbuf_pool_%d:%d",
+				 portid, socketid);
+			pktmbuf_pool[portid][socketid] =
 				rte_pktmbuf_pool_create(s, nb_mbuf,
 					MEMPOOL_CACHE_SIZE, 0,
 					RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
-			if (pktmbuf_pool[socketid] == NULL)
+			if (pktmbuf_pool[portid][socketid] == NULL)
 				rte_exit(EXIT_FAILURE,
 					"Cannot init mbuf pool on socket %d\n",
 					socketid);
@@ -678,8 +692,13 @@ init_mem(unsigned nb_mbuf)
 			printf("Allocated mbuf pool on socket %d\n",
 				socketid);
 
-			/* Setup either LPM or EM(f.e Hash). */
-			l3fwd_lkp.setup(socketid);
+			/* Setup either LPM or EM (e.g. hash), but only once
+			 * per available socket.
+			 */
+			if (!lkp_per_socket[socketid]) {
+				l3fwd_lkp.setup(socketid);
+				lkp_per_socket[socketid] = 1;
+			}
 		}
 		qconf = &lcore_conf[lcore_id];
 		qconf->ipv4_lookup_struct =
@@ -899,7 +918,14 @@ main(int argc, char **argv)
 			(struct ether_addr *)(val_eth + portid) + 1);
 
 		/* init memory */
-		ret = init_mem(NB_MBUF);
+		if (!per_port_pool) {
+			/* portid = 0 here does *not* mean the first port;
+			 * it signals that the portid argument is ignored.
+			 */
+			ret = init_mem(0, NB_MBUF(nb_ports));
+		} else {
+			ret = init_mem(portid, NB_MBUF(1));
+		}
 		if (ret < 0)
 			rte_exit(EXIT_FAILURE, "init_mem failed\n");
 
@@ -966,10 +992,16 @@ main(int argc, char **argv)
 			rte_eth_dev_info_get(portid, &dev_info);
 			rxq_conf = dev_info.default_rxconf;
 			rxq_conf.offloads = conf->rxmode.offloads;
-			ret = rte_eth_rx_queue_setup(portid, queueid, nb_rxd,
-					socketid,
-					&rxq_conf,
-					pktmbuf_pool[socketid]);
+			if (!per_port_pool)
+				ret = rte_eth_rx_queue_setup(portid, queueid,
+						nb_rxd, socketid,
+						&rxq_conf,
+						pktmbuf_pool[0][socketid]);
+			else
+				ret = rte_eth_rx_queue_setup(portid, queueid,
+						nb_rxd, socketid,
+						&rxq_conf,
+						pktmbuf_pool[portid][socketid]);
 			if (ret < 0)
 				rte_exit(EXIT_FAILURE,
 					"rte_eth_rx_queue_setup: err=%d, port=%d\n",
-- 
2.17.1
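
For readers skimming the diff, the standalone sketch below distills what
'--per-port-pool' changes: instead of one mempool per socket shared by
every port, each port gets its own pool on its NUMA socket, and that pool
is handed to the port's RX queues. This is an illustrative sketch, not
the patch itself: NB_MBUF_PER_PORT, per_port_pool[] as a 1-D array, and
create_pool_for_port() are invented names for brevity; l3fwd actually
sizes pools via the NB_MBUF(nports) macro and indexes them per socket as
shown in the diff above.

	/* Minimal sketch: one mbuf pool per port on the port's socket. */
	#include <stdio.h>
	#include <stdlib.h>

	#include <rte_eal.h>
	#include <rte_debug.h>
	#include <rte_ethdev.h>
	#include <rte_lcore.h>
	#include <rte_mbuf.h>
	#include <rte_mempool.h>

	#define NB_MBUF_PER_PORT   8192 /* illustrative; see NB_MBUF(nports) */
	#define MEMPOOL_CACHE_SIZE 256

	static struct rte_mempool *per_port_pool[RTE_MAX_ETHPORTS];

	static int
	create_pool_for_port(uint16_t portid)
	{
		char name[RTE_MEMPOOL_NAMESIZE];
		int socketid = rte_eth_dev_socket_id(portid);

		/* Unknown socket: fall back to the caller's socket. */
		if (socketid < 0)
			socketid = (int)rte_socket_id();

		/* Same naming scheme as the patch: "mbuf_pool_<port>:<socket>" */
		snprintf(name, sizeof(name), "mbuf_pool_%u:%d",
			 portid, socketid);

		per_port_pool[portid] = rte_pktmbuf_pool_create(name,
				NB_MBUF_PER_PORT, MEMPOOL_CACHE_SIZE, 0,
				RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
		if (per_port_pool[portid] == NULL)
			return -1;

		/* Each RX queue of this port is then fed from its own pool:
		 * rte_eth_rx_queue_setup(portid, queueid, nb_rxd, socketid,
		 *			  &rxq_conf, per_port_pool[portid]);
		 */
		return 0;
	}

	int
	main(int argc, char **argv)
	{
		uint16_t portid;

		if (rte_eal_init(argc, argv) < 0)
			rte_exit(EXIT_FAILURE, "EAL init failed\n");

		RTE_ETH_FOREACH_DEV(portid) {
			if (create_pool_for_port(portid) != 0)
				rte_exit(EXIT_FAILURE,
					 "Cannot create pool for port %u\n",
					 portid);
		}

		return 0;
	}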
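
As a usage sketch, with ports 0 and 1 in the port mask (core and channel
counts here are illustrative, not taken from the patch), the new option
would be passed after the EAL separator:

	./l3fwd -l 1,2 -n 4 -- -p 0x3 --per-port-pool

Without '--per-port-pool' the same invocation behaves as before the
patch: both ports draw mbufs from a single shared pool on each socket in
use.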