From: "Ananyev, Konstantin"
To: "Ruifeng Wang (Arm Technology China)", "Shreyansh.jain@nxp.com", "dev@dpdk.org"
Cc: nd
Date: Mon, 8 Apr 2019 09:29:33 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772580148A942C6@irsmsx105.ger.corp.intel.com>
References: <20190103112932.4415-1-shreyansh.jain@nxp.com>
Subject: Re: [dpdk-dev] [PATCH] examples/l3fwd: support separate buffer pool per port

>
> Hi Shreyansh,
>
> I tried this patch on MacchiatoBin + 82599 NIC.
> Compared with global-pool mode, per-port-pool mode showed slightly lower performance in the single-core test.

That was my thought too - for the case when queues from multiple ports are handled by the same core, it would probably only slow things down.
I wonder what the use case for this patch is, and what performance gain you observed?
Konstantin

> In the dual-core test, both modes had nearly the same performance.
>
> My setup only has two ports, which is limiting.
> I just want to know whether per-port-pool mode gives more performance gain when many ports are bound to different cores?
>
> Used commands:
> sudo ./examples/l3fwd/build/l3fwd -c 0x4 -w 0000:01:00.0 -w 0000:01:00.1 -- -P -p 3 --config='(0,0,2),(1,0,2)' --per-port-pool
> sudo ./examples/l3fwd/build/l3fwd -c 0xc -w 0000:01:00.0 -w 0000:01:00.1 -- -P -p 3 --config='(0,0,2),(1,0,3)' --per-port-pool
>
> Regards,
> /Ruifeng
>
> > -----Original Message-----
> > From: dev On Behalf Of Shreyansh Jain
> > Sent: 3 January 2019 19:30
> > To: dev@dpdk.org
> > Cc: Shreyansh Jain
> > Subject: [dpdk-dev] [PATCH] examples/l3fwd: support separate buffer pool per port
> >
> > Traditionally, only a single buffer pool per port (or per-port-per-socket) is created in the l3fwd application.
> >
> > If separate pools are created per port, it might lead to a performance gain, as packet alloc/dealloc requests would be isolated across ports (and their corresponding lcores).
> >
> > This patch adds an argument '--per-port-pool' to the l3fwd application.
> > By default, the old mode of a single pool per port (split on sockets) is active.
> >
> > Signed-off-by: Shreyansh Jain
> > ---
> >
> > RFC: https://mails.dpdk.org/archives/dev/2018-November/120002.html
> >
> >  examples/l3fwd/main.c | 74 +++++++++++++++++++++++++++++++------------
> >  1 file changed, 53 insertions(+), 21 deletions(-)
> >
> > diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> > index e4b99efe0..7b9683187 100644
> > --- a/examples/l3fwd/main.c
> > +++ b/examples/l3fwd/main.c
> > @@ -69,11 +69,13 @@ static int promiscuous_on;
> >  static int l3fwd_lpm_on;
> >  static int l3fwd_em_on;
> >
> > +/* Global variables. */
> > +
> >  static int numa_on = 1; /**< NUMA is enabled by default. */
> >  static int parse_ptype; /**< Parse packet type using rx callback, and */
> >  			/**< disabled by default */
> > -
> > -/* Global variables. */
> > +static int per_port_pool; /**< Use separate buffer pools per port; disabled */
> > +			/**< by default */
> >
> >  volatile bool force_quit;
> >
> > @@ -133,7 +135,8 @@ static struct rte_eth_conf port_conf = {
> >  	},
> >  };
> >
> > -static struct rte_mempool * pktmbuf_pool[NB_SOCKETS];
> > +static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
> > +static uint8_t lkp_per_socket[NB_SOCKETS];
> >
> >  struct l3fwd_lkp_mode {
> >  	void (*setup)(int);
> > @@ -285,7 +288,8 @@ print_usage(const char *prgname)
> >  		" [--no-numa]"
> >  		" [--hash-entry-num]"
> >  		" [--ipv6]"
> > -		" [--parse-ptype]\n\n"
> > +		" [--parse-ptype]"
> > +		" [--per-port-pool]\n\n"
> >
> >  		" -p PORTMASK: Hexadecimal bitmask of ports to configure\n"
> >  		" -P : Enable promiscuous mode\n"
> > @@ -299,7 +303,8 @@ print_usage(const char *prgname)
> >  		" --no-numa: Disable numa awareness\n"
> >  		" --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n"
> >  		" --ipv6: Set if running ipv6 packets\n"
> > -		" --parse-ptype: Set to use software to analyze packet type\n\n",
> > +		" --parse-ptype: Set to use software to analyze packet type\n"
> > +		" --per-port-pool: Use separate buffer pool per port\n\n",
> >  		prgname);
> >  }
> >
> > @@ -452,6 +457,7 @@ static const char short_options[] =
> >  #define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
> >  #define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
> >  #define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
> > +#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
> >  enum {
> >  	/* long options mapped to a short option */
> >
> > @@ -465,6 +471,7 @@ enum {
> >  	CMD_LINE_OPT_ENABLE_JUMBO_NUM,
> >  	CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
> >  	CMD_LINE_OPT_PARSE_PTYPE_NUM,
> > +	CMD_LINE_OPT_PARSE_PER_PORT_POOL,
> > };
> >
> >  static const struct option lgopts[] = {
> > @@ -475,6 +482,7 @@ static const struct option lgopts[] = {
> >  	{CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
> >  	{CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
> >  	{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
> > +	{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
> >  	{NULL, 0, 0, 0}
> >  };
> >
> > @@ -485,10 +493,10 @@ static const struct option lgopts[] = {
> >   * RTE_MAX is used to ensure that NB_MBUF never goes below a minimum
> >   * value of 8192
> >   */
> > -#define NB_MBUF RTE_MAX(	\
> > -	(nb_ports*nb_rx_queue*nb_rxd +		\
> > -	nb_ports*nb_lcores*MAX_PKT_BURST +	\
> > -	nb_ports*n_tx_queue*nb_txd +		\
> > +#define NB_MBUF(nports) RTE_MAX(	\
> > +	(nports*nb_rx_queue*nb_rxd +		\
> > +	nports*nb_lcores*MAX_PKT_BURST +	\
> > +	nports*n_tx_queue*nb_txd +		\
> >  	nb_lcores*MEMPOOL_CACHE_SIZE),		\
> >  	(unsigned)8192)
> >
> > @@ -594,6 +602,11 @@ parse_args(int argc, char **argv)
> >  			parse_ptype = 1;
> >  			break;
> >
> > +		case CMD_LINE_OPT_PARSE_PER_PORT_POOL:
> > +			printf("per port buffer pool is enabled\n");
> > +			per_port_pool = 1;
> > +			break;
> > +
> >  		default:
> >  			print_usage(prgname);
> >  			return -1;
> > @@ -642,7 +655,7 @@ print_ethaddr(const char *name, const struct ether_addr *eth_addr)
> >  }
> >
> >  static int
> > -init_mem(unsigned nb_mbuf)
> > +init_mem(uint16_t portid, unsigned int nb_mbuf)
> >  {
> >  	struct lcore_conf *qconf;
> >  	int socketid;
> > @@ -664,13 +677,14 @@ init_mem(unsigned nb_mbuf)
> >  				socketid, lcore_id, NB_SOCKETS);
> >  		}
> >
> > -		if (pktmbuf_pool[socketid] == NULL) {
> > -			snprintf(s, sizeof(s), "mbuf_pool_%d", socketid);
> > -			pktmbuf_pool[socketid] =
> > +		if (pktmbuf_pool[portid][socketid] == NULL) {
> > +			snprintf(s, sizeof(s), "mbuf_pool_%d:%d",
> > +				 portid, socketid);
> > +			pktmbuf_pool[portid][socketid] =
> >  				rte_pktmbuf_pool_create(s, nb_mbuf,
> >  					MEMPOOL_CACHE_SIZE, 0,
> >  					RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
> > -			if (pktmbuf_pool[socketid] == NULL)
> > +			if (pktmbuf_pool[portid][socketid] == NULL)
> >  				rte_exit(EXIT_FAILURE,
> >  					"Cannot init mbuf pool on socket %d\n",
> >  					socketid);
> > @@ -678,8 +692,13 @@ init_mem(unsigned nb_mbuf)
> >  			printf("Allocated mbuf pool on socket %d\n",
> >  				socketid);
> >
> > -			/* Setup either LPM or EM(f.e Hash). */
> > -			l3fwd_lkp.setup(socketid);
> > +			/* Setup either LPM or EM(f.e Hash). But, only once
> > +			 * per available socket.
> > +			 */
> > +			if (!lkp_per_socket[socketid]) {
> > +				l3fwd_lkp.setup(socketid);
> > +				lkp_per_socket[socketid] = 1;
> > +			}
> >  		}
> >  		qconf = &lcore_conf[lcore_id];
> >  		qconf->ipv4_lookup_struct =
> > @@ -899,7 +918,14 @@ main(int argc, char **argv)
> >  			(struct ether_addr *)(val_eth + portid) + 1);
> >
> >  		/* init memory */
> > -		ret = init_mem(NB_MBUF);
> > +		if (!per_port_pool) {
> > +			/* portid = 0; this is *not* signifying the first port,
> > +			 * rather, it signifies that portid is ignored.
> > +			 */
> > +			ret = init_mem(0, NB_MBUF(nb_ports));
> > +		} else {
> > +			ret = init_mem(portid, NB_MBUF(1));
> > +		}
> >  		if (ret < 0)
> >  			rte_exit(EXIT_FAILURE, "init_mem failed\n");
> >
> > @@ -966,10 +992,16 @@ main(int argc, char **argv)
> >  			rte_eth_dev_info_get(portid, &dev_info);
> >  			rxq_conf = dev_info.default_rxconf;
> >  			rxq_conf.offloads = conf->rxmode.offloads;
> > -			ret = rte_eth_rx_queue_setup(portid, queueid, nb_rxd,
> > -					socketid,
> > -					&rxq_conf,
> > -					pktmbuf_pool[socketid]);
> > +			if (!per_port_pool)
> > +				ret = rte_eth_rx_queue_setup(portid, queueid,
> > +						nb_rxd, socketid,
> > +						&rxq_conf,
> > +						pktmbuf_pool[0][socketid]);
> > +			else
> > +				ret = rte_eth_rx_queue_setup(portid, queueid,
> > +						nb_rxd, socketid,
> > +						&rxq_conf,
> > +						pktmbuf_pool[portid][socketid]);
> >  			if (ret < 0)
> >  				rte_exit(EXIT_FAILURE,
> >  					"rte_eth_rx_queue_setup: err=%d, port=%d\n",
> > --
> > 2.17.1
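
[Editor's illustration] For readers skimming the thread, below is a minimal standalone sketch of the pool-selection scheme the patch introduces. It is not the l3fwd code itself: get_rx_pool(), NB_MBUF_PER_POOL, and the cache size of 256 are illustrative stand-ins for the patch's init_mem()/NB_MBUF() logic; only the (port, socket) indexing with a shared fallback under index 0 mirrors the patch.

	/* Sketch of the two pool layouts discussed above, under the
	 * assumptions stated in the note that precedes this code.
	 */
	#include <stdio.h>
	#include <rte_ethdev.h>
	#include <rte_mbuf.h>
	#include <rte_mempool.h>

	#define NB_SOCKETS 8
	#define NB_MBUF_PER_POOL 8192	/* illustrative; l3fwd computes this */

	static struct rte_mempool *pools[RTE_MAX_ETHPORTS][NB_SOCKETS];

	/* Create (or reuse) the pool that an rx queue of 'portid' on
	 * 'socketid' should draw mbufs from. With per_port_pool == 0,
	 * every port shares the pool stored under index 0, as in the
	 * patch's non-per-port mode.
	 */
	static struct rte_mempool *
	get_rx_pool(uint16_t portid, int socketid, int per_port_pool)
	{
		uint16_t idx = per_port_pool ? portid : 0;
		char name[RTE_MEMPOOL_NAMESIZE];

		if (pools[idx][socketid] == NULL) {
			snprintf(name, sizeof(name), "mbuf_pool_%u:%d",
				 idx, socketid);
			pools[idx][socketid] = rte_pktmbuf_pool_create(name,
					NB_MBUF_PER_POOL, 256 /* cache */, 0,
					RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
		}
		return pools[idx][socketid];
	}

With a helper like this, the two modes would differ only in which mempool is passed as the final argument of rte_eth_rx_queue_setup(), which is exactly the shape of the rx-queue-setup hunk in the patch above.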