From: Wisam Jaddo
Date: Mon, 4 Oct 2021 15:55:13 +0300
Message-ID: <20211004125513.9492-1-wisamm@nvidia.com>
X-Mailer: git-send-email 2.17.1
Subject: [dpdk-dev] [PATCH] app/flow-perf: export some configuration options
List-Id: DPDK patches and discussions

Some options are often needed at run time, so fixing them at compile
time is not correct. As a result, these options have been exported as
command-line options that can be set at run time.

The options exported are:
 --txq=N
 --rxq=N
 --txd=N
 --rxd=N
 --mbuf-size=N
 --mbuf-cache-size=N
 --total-mbuf-count=N
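For example, a test that previously required editing config.h and
recompiling can now be configured entirely from the command line. A
hypothetical invocation (the EAL arguments and PCI address below are
placeholders; the option values match the previous compile-time
defaults):

  dpdk-test-flow-perf -n 4 -a 0000:08:00.0 -- \
        --ingress --ether --ipv4 --queue --rules-count=1000000 \
        --rxq=4 --txq=4 --rxd=256 --txd=256 \
        --mbuf-size=2048 --mbuf-cache-size=512 --total-mbuf-count=32000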
Signed-off-by: Wisam Jaddo
Reviewed-by: Alexander Kozyrev
---
 app/test-flow-perf/actions_gen.c |  14 ++---
 app/test-flow-perf/actions_gen.h |   2 +-
 app/test-flow-perf/config.h      |   4 +-
 app/test-flow-perf/flow_gen.c    |   3 +-
 app/test-flow-perf/flow_gen.h    |   1 +
 app/test-flow-perf/main.c        | 102 +++++++++++++++++++++++++------
 doc/guides/tools/flow-perf.rst   |  33 ++++++++--
 7 files changed, 124 insertions(+), 35 deletions(-)

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 82cddfc676..7c209f7266 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -909,25 +909,25 @@ void
 fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 	uint32_t counter, uint16_t next_table, uint16_t hairpinq,
 	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx,
-	bool unique_data)
+	bool unique_data, uint8_t rx_queues_count)
 {
 	struct additional_para additional_para_data;
 	uint8_t actions_counter = 0;
 	uint16_t hairpin_queues[hairpinq];
-	uint16_t queues[RXQ_NUM];
+	uint16_t queues[rx_queues_count];
 	uint16_t i, j;
 
-	for (i = 0; i < RXQ_NUM; i++)
+	for (i = 0; i < rx_queues_count; i++)
 		queues[i] = i;
 
 	for (i = 0; i < hairpinq; i++)
-		hairpin_queues[i] = i + RXQ_NUM;
+		hairpin_queues[i] = i + rx_queues_count;
 
 	additional_para_data = (struct additional_para){
-		.queue = counter % RXQ_NUM,
+		.queue = counter % rx_queues_count,
 		.next_table = next_table,
 		.queues = queues,
-		.queues_number = RXQ_NUM,
+		.queues_number = rx_queues_count,
 		.counter = counter,
 		.encap_data = encap_data,
 		.decap_data = decap_data,
@@ -938,7 +938,7 @@ fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 	if (hairpinq != 0) {
 		additional_para_data.queues = hairpin_queues;
 		additional_para_data.queues_number = hairpinq;
-		additional_para_data.queue = (counter % hairpinq) + RXQ_NUM;
+		additional_para_data.queue = (counter % hairpinq) + rx_queues_count;
 	}
 
 	static const struct actions_dict {

diff --git a/app/test-flow-perf/actions_gen.h b/app/test-flow-perf/actions_gen.h
index 6f2f833496..8990686269 100644
--- a/app/test-flow-perf/actions_gen.h
+++ b/app/test-flow-perf/actions_gen.h
@@ -20,6 +20,6 @@ void
 fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 	uint32_t counter, uint16_t next_table, uint16_t hairpinq,
 	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx,
-	bool unique_data);
+	bool unique_data, uint8_t rx_queues_count);
 
 #endif /* FLOW_PERF_ACTION_GEN */

diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index a14d4e05e1..3d85e0d49a 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -8,8 +8,8 @@
 #define GET_RSS_HF() (ETH_RSS_IP)
 
 /* Configuration */
-#define RXQ_NUM 4
-#define TXQ_NUM 4
+#define RXQ_NUM 1
+#define TXQ_NUM 1
 #define TOTAL_MBUF_NUM 32000
 #define MBUF_SIZE 2048
 #define MBUF_CACHE_SIZE 512

diff --git a/app/test-flow-perf/flow_gen.c b/app/test-flow-perf/flow_gen.c
index 8f87fac5f6..51871dbfdc 100644
--- a/app/test-flow-perf/flow_gen.c
+++ b/app/test-flow-perf/flow_gen.c
@@ -46,6 +46,7 @@ generate_flow(uint16_t port_id,
 	uint64_t encap_data,
 	uint64_t decap_data,
 	uint8_t core_idx,
+	uint8_t rx_queues_count,
 	bool unique_data,
 	struct rte_flow_error *error)
 {
@@ -63,7 +64,7 @@ generate_flow(uint16_t port_id,
 
 	fill_actions(actions, flow_actions, outer_ip_src, next_table,
 		hairpinq, encap_data, decap_data, core_idx,
-		unique_data);
+		unique_data, rx_queues_count);
 
 	fill_items(items, flow_items, outer_ip_src, core_idx);

diff --git a/app/test-flow-perf/flow_gen.h b/app/test-flow-perf/flow_gen.h
index dc887fceae..1118a9fc14 100644
--- a/app/test-flow-perf/flow_gen.h
+++ b/app/test-flow-perf/flow_gen.h
@@ -35,6 +35,7 @@ generate_flow(uint16_t port_id,
 	uint64_t encap_data,
 	uint64_t decap_data,
 	uint8_t core_idx,
+	uint8_t rx_queues_count,
 	bool unique_data,
 	struct rte_flow_error *error);

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index b99e603f81..102e9e6ede 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -65,6 +65,14 @@ static bool dump_socket_mem_flag;
 static bool enable_fwd;
 static bool unique_data;
 
+static uint8_t rx_queues_count;
+static uint8_t tx_queues_count;
+static uint8_t rxd_count;
+static uint8_t txd_count;
+static uint32_t mbuf_size;
+static uint32_t mbuf_cache_size;
+static uint32_t total_mbuf_num;
+
 static struct rte_mempool *mbuf_mp;
 static uint32_t nb_lcores;
 static uint32_t rules_count;
@@ -145,6 +153,14 @@ usage(char *progname)
 		" default is %d\n", DEFAULT_GROUP);
 	printf("  --cores=N: to set the number of needed "
 		"cores to insert rte_flow rules, default is 1\n");
+	printf("  --rxq=N: to set the count of receive queues\n");
+	printf("  --txq=N: to set the count of send queues\n");
+	printf("  --rxd=N: to set the count of rxd\n");
+	printf("  --txd=N: to set the count of txd\n");
+	printf("  --mbuf-size=N: to set the size of mbuf\n");
+	printf("  --mbuf-cache-size=N: to set the size of mbuf cache\n");
+	printf("  --total-mbuf-count=N: to set the count of total mbuf count\n");
+
 
 	printf("To set flow items:\n");
 	printf("  --ether: add ether layer in flow items\n");
@@ -575,6 +591,14 @@ args_parse(int argc, char **argv)
 		{ "unique-data", 0, 0, 0 },
 		{ "portmask", 1, 0, 0 },
 		{ "cores", 1, 0, 0 },
+		{ "meter-profile-alg", 1, 0, 0 },
+		{ "rxq", 1, 0, 0 },
+		{ "txq", 1, 0, 0 },
+		{ "rxd", 1, 0, 0 },
+		{ "txd", 1, 0, 0 },
+		{ "mbuf-size", 1, 0, 0 },
+		{ "mbuf-cache-size", 1, 0, 0 },
+		{ "total-mbuf-count", 1, 0, 0 },
 		/* Attributes */
 		{ "ingress", 0, 0, 0 },
 		{ "egress", 0, 0, 0 },
@@ -625,7 +649,7 @@ args_parse(int argc, char **argv)
 		{ "set-ipv4-dscp", 0, 0, 0 },
 		{ "set-ipv6-dscp", 0, 0, 0 },
 		{ "flag", 0, 0, 0 },
-		{ "meter", 0, 0, 0 },
+		{ "meter", 0, 0, 0 },
 		{ "raw-encap", 1, 0, 0 },
 		{ "raw-decap", 1, 0, 0 },
 		{ "vxlan-encap", 0, 0, 0 },
@@ -789,6 +813,34 @@ args_parse(int argc, char **argv)
 				rte_exit(EXIT_FAILURE, "Invalid fwd port mask\n");
 			ports_mask = pm;
 		}
+		if (strcmp(lgopts[opt_idx].name, "rxq") == 0) {
+			n = atoi(optarg);
+			rx_queues_count = (uint8_t) n;
+		}
+		if (strcmp(lgopts[opt_idx].name, "txq") == 0) {
+			n = atoi(optarg);
+			tx_queues_count = (uint8_t) n;
+		}
+		if (strcmp(lgopts[opt_idx].name, "rxd") == 0) {
+			n = atoi(optarg);
+			rxd_count = (uint8_t) n;
+		}
+		if (strcmp(lgopts[opt_idx].name, "txd") == 0) {
+			n = atoi(optarg);
+			txd_count = (uint8_t) n;
+		}
+		if (strcmp(lgopts[opt_idx].name, "mbuf-size") == 0) {
+			n = atoi(optarg);
+			mbuf_size = (uint32_t) n;
+		}
+		if (strcmp(lgopts[opt_idx].name, "mbuf-cache-size") == 0) {
+			n = atoi(optarg);
+			mbuf_cache_size = (uint32_t) n;
+		}
+		if (strcmp(lgopts[opt_idx].name, "total-mbuf-count") == 0) {
+			n = atoi(optarg);
+			total_mbuf_num = (uint32_t) n;
+		}
 		if (strcmp(lgopts[opt_idx].name, "cores") == 0) {
 			n = atoi(optarg);
 			if ((int) rte_lcore_count() <= n) {
@@ -1175,7 +1227,8 @@ insert_flows(int port_id, uint8_t core_id)
 	 */
 	flow = generate_flow(port_id, 0, flow_attrs,
 		global_items, global_actions,
-		flow_group, 0, 0, 0, 0, core_id, unique_data, &error);
+		flow_group, 0, 0, 0, 0, core_id, rx_queues_count,
+		unique_data, &error);
 
 	if (flow == NULL) {
 		print_flow_error(error);
@@ -1191,7 +1244,8 @@ insert_flows(int port_id, uint8_t core_id)
 			JUMP_ACTION_TABLE, counter,
 			hairpin_queues_num,
 			encap_data, decap_data,
-			core_id, unique_data, &error);
+			core_id, rx_queues_count,
+			unique_data, &error);
 
 		if (!counter) {
 			first_flow_latency = (double) (rte_get_timer_cycles() - start_batch);
@@ -1662,7 +1716,7 @@ init_lcore_info(void)
 	 * logical cores except first core, since it's reserved for
 	 * stats prints.
 	 */
-	nb_fwd_streams = nr_port * RXQ_NUM;
+	nb_fwd_streams = nr_port * rx_queues_count;
 	if ((int)(nb_lcores - 1) >= nb_fwd_streams)
 		for (i = 0; i < (int)(nb_lcores - 1); i++) {
 			lcore = rte_get_next_lcore(lcore, 0, 0);
@@ -1692,7 +1746,7 @@ init_lcore_info(void)
 	lcore = rte_get_next_lcore(-1, 0, 0);
 	for (port = 0; port < nr_port; port++) {
 		/* Create FWD stream */
-		for (queue = 0; queue < RXQ_NUM; queue++) {
+		for (queue = 0; queue < rx_queues_count; queue++) {
 			if (!lcore_infos[lcore].streams_nb ||
 				!(stream_id % lcore_infos[lcore].streams_nb)) {
 				lcore = rte_get_next_lcore(lcore, 0, 0);
@@ -1745,17 +1799,17 @@ init_port(void)
 	struct rte_eth_rxconf rxq_conf;
 	struct rte_eth_dev_info dev_info;
 
-	nr_queues = RXQ_NUM;
+	nr_queues = rx_queues_count;
 	if (hairpin_queues_num != 0)
-		nr_queues = RXQ_NUM + hairpin_queues_num;
+		nr_queues = rx_queues_count + hairpin_queues_num;
 
 	nr_ports = rte_eth_dev_count_avail();
 	if (nr_ports == 0)
 		rte_exit(EXIT_FAILURE, "Error: no port detected\n");
 
 	mbuf_mp = rte_pktmbuf_pool_create("mbuf_pool",
-					TOTAL_MBUF_NUM, MBUF_CACHE_SIZE,
-					0, MBUF_SIZE,
+					total_mbuf_num, mbuf_cache_size,
+					0, mbuf_size,
 					rte_socket_id());
 	if (mbuf_mp == NULL)
 		rte_exit(EXIT_FAILURE, "Error: can't init mbuf pool\n");
@@ -1781,8 +1835,8 @@ init_port(void)
 				ret, port_id);
 
 		rxq_conf = dev_info.default_rxconf;
-		for (std_queue = 0; std_queue < RXQ_NUM; std_queue++) {
-			ret = rte_eth_rx_queue_setup(port_id, std_queue, NR_RXD,
+		for (std_queue = 0; std_queue < rx_queues_count; std_queue++) {
+			ret = rte_eth_rx_queue_setup(port_id, std_queue, rxd_count,
 					rte_eth_dev_socket_id(port_id),
 					&rxq_conf,
 					mbuf_mp);
@@ -1793,8 +1847,8 @@ init_port(void)
 		}
 
 		txq_conf = dev_info.default_txconf;
-		for (std_queue = 0; std_queue < TXQ_NUM; std_queue++) {
-			ret = rte_eth_tx_queue_setup(port_id, std_queue, NR_TXD,
+		for (std_queue = 0; std_queue < tx_queues_count; std_queue++) {
+			ret = rte_eth_tx_queue_setup(port_id, std_queue, txd_count,
 					rte_eth_dev_socket_id(port_id),
 					&txq_conf);
 			if (ret < 0)
@@ -1814,32 +1868,32 @@ init_port(void)
 			/*
 			 * Configure peer which represents hairpin Tx.
 			 * Hairpin queue numbers start after standard queues
-			 * (RXQ_NUM and TXQ_NUM).
+			 * (rx_queues_count and tx_queues_count).
 			 */
-			for (hairpin_queue = RXQ_NUM, std_queue = 0;
+			for (hairpin_queue = rx_queues_count, std_queue = 0;
 					hairpin_queue < nr_queues;
 					hairpin_queue++, std_queue++) {
 				hairpin_conf.peers[0].port = port_id;
 				hairpin_conf.peers[0].queue =
-					std_queue + TXQ_NUM;
+					std_queue + tx_queues_count;
 				ret = rte_eth_rx_hairpin_queue_setup(
 						port_id, hairpin_queue,
-						NR_RXD, &hairpin_conf);
+						rxd_count, &hairpin_conf);
 				if (ret != 0)
 					rte_exit(EXIT_FAILURE,
 						":: Hairpin rx queue setup failed: err=%d, port=%u\n",
 						ret, port_id);
 			}
 
-			for (hairpin_queue = TXQ_NUM, std_queue = 0;
+			for (hairpin_queue = tx_queues_count, std_queue = 0;
 					hairpin_queue < nr_queues;
 					hairpin_queue++, std_queue++) {
 				hairpin_conf.peers[0].port = port_id;
 				hairpin_conf.peers[0].queue =
-					std_queue + RXQ_NUM;
+					std_queue + rx_queues_count;
 				ret = rte_eth_tx_hairpin_queue_setup(
 						port_id, hairpin_queue,
-						NR_TXD, &hairpin_conf);
+						txd_count, &hairpin_conf);
 				if (ret != 0)
 					rte_exit(EXIT_FAILURE,
 						":: Hairpin tx queue setup failed: err=%d, port=%u\n",
@@ -1877,6 +1931,14 @@ main(int argc, char **argv)
 	flow_group = DEFAULT_GROUP;
 	unique_data = false;
 
+	rx_queues_count = (uint8_t) RXQ_NUM;
+	tx_queues_count = (uint8_t) TXQ_NUM;
+	rxd_count = (uint8_t) NR_RXD;
+	txd_count = (uint8_t) NR_TXD;
+	mbuf_size = (uint32_t) MBUF_SIZE;
+	mbuf_cache_size = (uint32_t) MBUF_CACHE_SIZE;
+	total_mbuf_num = (uint32_t) TOTAL_MBUF_NUM;
+
 	signal(SIGINT, signal_handler);
 	signal(SIGTERM, signal_handler);

diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 280bf7e0e0..0855f88689 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -100,10 +100,35 @@ The command line options are:
 	Set the number of needed cores to insert/delete rte_flow rules.
 	Default cores count is 1.
 
-*	``--unique-data``
-	Flag to set using unique data for all actions that support data,
-	Such as header modify and encap actions. Default is using fixed
-	data for any action that support data for all flows.
+*	``--meter-profile-alg``
+	Set the traffic metering algorithm.
+	Example: meter-profile-alg=srtcmp, default algorithm is srtcm_rfc2697
+
+*	``--unique-data``
+	Flag to set using unique data for all actions that support data,
+	Such as header modify and encap actions. Default is using fixed
+	data for any action that support data for all flows.
+
+*	``--rxq=N``
+	Set the count of receive queues, default is 1.
+
+*	``--txq=N``
+	Set the count of send queues, default is 1.
+
+*	``--rxd=N``
+	Set the count of rxd, default is 256.
+
+*	``--txd=N``
+	Set the count of txd, default is 256.
+
+*	``--mbuf-size=N``
+	Set the size of mbuf, default size is 2048.
+
+*	``--mbuf-cache-size=N``
+	Set the size of mbuf cache, default size is 512.
+
+*	``--total-mbuf-count=N``
+	Set the count of total mbuf number, default count is 32000.
 
 Attributes:
-- 
2.17.1