DPDK patches and discussions
* [dpdk-dev] [PATCH 0/5] New flow-perf fixes
@ 2021-03-07  9:11 Wisam Jaddo
  2021-03-07  9:11 ` [dpdk-dev] [PATCH 1/5] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
                   ` (4 more replies)
  0 siblings, 5 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-07  9:11 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

Wisam Jaddo (5):
  app/flow-perf: start using more generic wrapper for cycles
  app/flow-perf: add new option to use unique data on the fly
  app/flow-perf: fix naming of CPU used structured data
  app/flow-perf: fix report total stats for masked ports
  app/flow-perf: fix the incremental IPv6 src set

 app/test-flow-perf/actions_gen.c | 77 +++++++++++++++++---------------
 app/test-flow-perf/actions_gen.h |  3 +-
 app/test-flow-perf/config.h      |  8 +---
 app/test-flow-perf/flow_gen.c    |  4 +-
 app/test-flow-perf/flow_gen.h    |  1 +
 app/test-flow-perf/items_gen.c   | 13 +++---
 app/test-flow-perf/main.c        | 56 +++++++++++++----------
 doc/guides/tools/flow-perf.rst   |  5 +++
 8 files changed, 92 insertions(+), 75 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH 1/5] app/flow-perf: start using more generic wrapper for cycles
  2021-03-07  9:11 [dpdk-dev] [PATCH 0/5] New flow-perf fixes Wisam Jaddo
@ 2021-03-07  9:11 ` Wisam Jaddo
  2021-03-10 13:45   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  2021-03-10 13:48   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  2021-03-07  9:11 ` [dpdk-dev] [PATCH 2/5] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-07  9:11 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

rdtsc() is x86-specific and might fail on other architectures,
so it is better to use a more generic API for cycle measurement.
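
A minimal sketch of the portable measurement pattern used below, assuming
only the DPDK cycles API from <rte_cycles.h>; do_work() is a placeholder
for the timed section:

	#include <rte_cycles.h>

	uint64_t start = rte_get_timer_cycles();	/* generic cycle counter */
	do_work();					/* placeholder for the timed work */
	/* elapsed seconds = cycle delta / timer frequency */
	double seconds = (double)(rte_get_timer_cycles() - start)
			/ rte_get_timer_hz();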

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 99d0463456..8b5a11c15e 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -969,7 +969,7 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 	end_counter = (core_id + 1) * rules_count_per_core;
 
 	cpu_time_used = 0;
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (counter = start_counter; counter < end_counter; counter++) {
 		if (ops == METER_CREATE)
 			create_meter_rule(port_id, counter);
@@ -984,10 +984,10 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 		if (!((counter + 1) % rules_batch)) {
 			rules_batch_idx = ((counter + 1) / rules_batch) - 1;
 			cpu_time_per_batch[rules_batch_idx] =
-				((double)(rte_rdtsc() - start_batch))
-				/ rte_get_tsc_hz();
+				((double)(rte_get_timer_cycles() - start_batch))
+				/ rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
@@ -1089,7 +1089,7 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 	if (flow_group > 0 && core_id == 0)
 		rules_count_per_core++;
 
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (i = 0; i < (uint32_t) rules_count_per_core; i++) {
 		if (flows_list[i] == 0)
 			break;
@@ -1107,12 +1107,12 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 		 * for this batch.
 		 */
 		if (!((i + 1) % rules_batch)) {
-			end_batch = rte_rdtsc();
+			end_batch = rte_get_timer_cycles();
 			delta = (double) (end_batch - start_batch);
 			rules_batch_idx = ((i + 1) / rules_batch) - 1;
-			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_tsc_hz();
+			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
@@ -1185,7 +1185,7 @@ insert_flows(int port_id, uint8_t core_id)
 		flows_list[flow_index++] = flow;
 	}
 
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (counter = start_counter; counter < end_counter; counter++) {
 		flow = generate_flow(port_id, flow_group,
 			flow_attrs, flow_items, flow_actions,
@@ -1211,12 +1211,12 @@ insert_flows(int port_id, uint8_t core_id)
 		 * for this batch.
 		 */
 		if (!((counter + 1) % rules_batch)) {
-			end_batch = rte_rdtsc();
+			end_batch = rte_get_timer_cycles();
 			delta = (double) (end_batch - start_batch);
 			rules_batch_idx = ((counter + 1) / rules_batch) - 1;
-			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_tsc_hz();
+			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH 2/5] app/flow-perf: add new option to use unique data on the fly
  2021-03-07  9:11 [dpdk-dev] [PATCH 0/5] New flow-perf fixes Wisam Jaddo
  2021-03-07  9:11 ` [dpdk-dev] [PATCH 1/5] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
@ 2021-03-07  9:11 ` Wisam Jaddo
  2021-03-07  9:12 ` [dpdk-dev] [PATCH 3/5] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-07  9:11 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

Currently, unique data can only be enabled by setting the
FIXED_VALUES variable in config.h to 0, and this is decided at
compilation time, so the user can use only a single mode per
compilation.

Starting with this commit, the user has the ability to enable
this feature on the fly by using the new option:
--unique-data

Example of unique data usage:
Insert many rules with different encap data for flows that
have an encap action in them.
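
A minimal sketch of the runtime switch this option enables in the action
generators (illustrative only; the value name is generic and not the exact
field used by every action):

	uint32_t value = para.counter;	/* per-flow counter */

	if (!para.unique_data)
		value = 1;		/* fixed value, identical for all flows */
	/* otherwise every flow carries its own data, e.g. unique encap data */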

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/actions_gen.c | 77 +++++++++++++++++---------------
 app/test-flow-perf/actions_gen.h |  3 +-
 app/test-flow-perf/config.h      |  8 +---
 app/test-flow-perf/flow_gen.c    |  4 +-
 app/test-flow-perf/flow_gen.h    |  1 +
 app/test-flow-perf/main.c        | 13 ++++--
 doc/guides/tools/flow-perf.rst   |  5 +++
 7 files changed, 62 insertions(+), 49 deletions(-)

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 1f5c64fde9..82cddfc676 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -30,6 +30,7 @@ struct additional_para {
 	uint64_t encap_data;
 	uint64_t decap_data;
 	uint8_t core_idx;
+	bool unique_data;
 };
 
 /* Storage for struct rte_flow_action_raw_encap including external data. */
@@ -202,14 +203,14 @@ add_count(struct rte_flow_action *actions,
 static void
 add_set_src_mac(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_mac set_macs[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t mac = para.counter;
 	uint16_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		mac = 1;
 
 	/* Mac address to be set is random each time */
@@ -225,14 +226,14 @@ add_set_src_mac(struct rte_flow_action *actions,
 static void
 add_set_dst_mac(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_mac set_macs[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t mac = para.counter;
 	uint16_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		mac = 1;
 
 	/* Mac address to be set is random each time */
@@ -248,13 +249,13 @@ add_set_dst_mac(struct rte_flow_action *actions,
 static void
 add_set_src_ipv4(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv4 set_ipv4[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ip = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip = 1;
 
 	/* IPv4 value to be set is random each time */
@@ -267,13 +268,13 @@ add_set_src_ipv4(struct rte_flow_action *actions,
 static void
 add_set_dst_ipv4(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv4 set_ipv4[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ip = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip = 1;
 
 	/* IPv4 value to be set is random each time */
@@ -286,14 +287,14 @@ add_set_dst_ipv4(struct rte_flow_action *actions,
 static void
 add_set_src_ipv6(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv6 set_ipv6[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ipv6 = para.counter;
 	uint8_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ipv6 = 1;
 
 	/* IPv6 value to set is random each time */
@@ -309,14 +310,14 @@ add_set_src_ipv6(struct rte_flow_action *actions,
 static void
 add_set_dst_ipv6(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv6 set_ipv6[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ipv6 = para.counter;
 	uint8_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ipv6 = 1;
 
 	/* IPv6 value to set is random each time */
@@ -332,13 +333,13 @@ add_set_dst_ipv6(struct rte_flow_action *actions,
 static void
 add_set_src_tp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_tp set_tp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t tp = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		tp = 100;
 
 	/* TP src port is random each time */
@@ -353,13 +354,13 @@ add_set_src_tp(struct rte_flow_action *actions,
 static void
 add_set_dst_tp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_tp set_tp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t tp = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		tp = 100;
 
 	/* TP src port is random each time */
@@ -375,13 +376,13 @@ add_set_dst_tp(struct rte_flow_action *actions,
 static void
 add_inc_tcp_ack(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ack_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ack_value = 1;
 
 	value[para.core_idx] = RTE_BE32(ack_value);
@@ -393,13 +394,13 @@ add_inc_tcp_ack(struct rte_flow_action *actions,
 static void
 add_dec_tcp_ack(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ack_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ack_value = 1;
 
 	value[para.core_idx] = RTE_BE32(ack_value);
@@ -411,13 +412,13 @@ add_dec_tcp_ack(struct rte_flow_action *actions,
 static void
 add_inc_tcp_seq(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t seq_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		seq_value = 1;
 
 	value[para.core_idx] = RTE_BE32(seq_value);
@@ -429,13 +430,13 @@ add_inc_tcp_seq(struct rte_flow_action *actions,
 static void
 add_dec_tcp_seq(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t seq_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		seq_value = 1;
 
 	value[para.core_idx] = RTE_BE32(seq_value);
@@ -447,13 +448,13 @@ add_dec_tcp_seq(struct rte_flow_action *actions,
 static void
 add_set_ttl(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ttl set_ttl[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ttl_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ttl_value = 1;
 
 	/* Set ttl to random value each time */
@@ -476,13 +477,13 @@ add_dec_ttl(struct rte_flow_action *actions,
 static void
 add_set_ipv4_dscp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_dscp set_dscp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t dscp_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		dscp_value = 1;
 
 	/* Set dscp to random value each time */
@@ -497,13 +498,13 @@ add_set_ipv4_dscp(struct rte_flow_action *actions,
 static void
 add_set_ipv6_dscp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_dscp set_dscp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t dscp_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		dscp_value = 1;
 
 	/* Set dscp to random value each time */
@@ -577,7 +578,7 @@ add_ipv4_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip_dst = 1;
 
 	memset(&ipv4_hdr, 0, sizeof(struct rte_ipv4_hdr));
@@ -643,7 +644,7 @@ add_vxlan_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&vxlan_hdr, 0, sizeof(struct rte_vxlan_hdr));
@@ -666,7 +667,7 @@ add_vxlan_gpe_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&vxlan_gpe_hdr, 0, sizeof(struct rte_vxlan_gpe_hdr));
@@ -707,7 +708,7 @@ add_geneve_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&geneve_hdr, 0, sizeof(struct rte_geneve_hdr));
@@ -730,7 +731,7 @@ add_gtp_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		teid_value = 1;
 
 	memset(&gtp_hdr, 0, sizeof(struct rte_flow_item_gtp));
@@ -849,7 +850,7 @@ add_vxlan_encap(struct rte_flow_action *actions,
 	uint32_t ip_dst = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip_dst = 1;
 
 	items[0].spec = &item_eth;
@@ -907,7 +908,8 @@ add_meter(struct rte_flow_action *actions,
 void
 fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 	uint32_t counter, uint16_t next_table, uint16_t hairpinq,
-	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx)
+	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx,
+	bool unique_data)
 {
 	struct additional_para additional_para_data;
 	uint8_t actions_counter = 0;
@@ -930,6 +932,7 @@ fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 		.encap_data = encap_data,
 		.decap_data = decap_data,
 		.core_idx = core_idx,
+		.unique_data = unique_data,
 	};
 
 	if (hairpinq != 0) {
diff --git a/app/test-flow-perf/actions_gen.h b/app/test-flow-perf/actions_gen.h
index 77353cfe09..6f2f833496 100644
--- a/app/test-flow-perf/actions_gen.h
+++ b/app/test-flow-perf/actions_gen.h
@@ -19,6 +19,7 @@
 
 void fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 	uint32_t counter, uint16_t next_table, uint16_t hairpinq,
-	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx);
+	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx,
+	bool unique_data);
 
 #endif /* FLOW_PERF_ACTION_GEN */
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index 3d4696d61a..a14d4e05e1 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
 #define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP | ETH_RSS_TCP)
+#define GET_RSS_HF() (ETH_RSS_IP)
 
 /* Configuration */
 #define RXQ_NUM 4
@@ -19,12 +19,6 @@
 #define METER_CIR 1250000
 #define DEFAULT_METER_PROF_ID 100
 
-/* This is used for encap/decap & header modify actions.
- * When it's 1: it means all actions have fixed values.
- * When it's 0: it means all actions will have different values.
- */
-#define FIXED_VALUES 1
-
 /* Items/Actions parameters */
 #define JUMP_ACTION_TABLE 2
 #define VLAN_VALUE 1
diff --git a/app/test-flow-perf/flow_gen.c b/app/test-flow-perf/flow_gen.c
index df4af16de8..8f87fac5f6 100644
--- a/app/test-flow-perf/flow_gen.c
+++ b/app/test-flow-perf/flow_gen.c
@@ -46,6 +46,7 @@ generate_flow(uint16_t port_id,
 	uint64_t encap_data,
 	uint64_t decap_data,
 	uint8_t core_idx,
+	bool unique_data,
 	struct rte_flow_error *error)
 {
 	struct rte_flow_attr attr;
@@ -61,7 +62,8 @@ generate_flow(uint16_t port_id,
 
 	fill_actions(actions, flow_actions,
 		outer_ip_src, next_table, hairpinq,
-		encap_data, decap_data, core_idx);
+		encap_data, decap_data, core_idx,
+		unique_data);
 
 	fill_items(items, flow_items, outer_ip_src, core_idx);
 
diff --git a/app/test-flow-perf/flow_gen.h b/app/test-flow-perf/flow_gen.h
index f1d0999af1..dc887fceae 100644
--- a/app/test-flow-perf/flow_gen.h
+++ b/app/test-flow-perf/flow_gen.h
@@ -35,6 +35,7 @@ generate_flow(uint16_t port_id,
 	uint64_t encap_data,
 	uint64_t decap_data,
 	uint8_t core_idx,
+	bool unique_data,
 	struct rte_flow_error *error);
 
 #endif /* FLOW_PERF_FLOW_GEN */
diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 8b5a11c15e..4054178273 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -61,6 +61,7 @@ static bool dump_iterations;
 static bool delete_flag;
 static bool dump_socket_mem_flag;
 static bool enable_fwd;
+static bool unique_data;
 
 static struct rte_mempool *mbuf_mp;
 static uint32_t nb_lcores;
@@ -131,6 +132,8 @@ usage(char *progname)
 	printf("  --enable-fwd: To enable packets forwarding"
 		" after insertion\n");
 	printf("  --portmask=N: hexadecimal bitmask of ports used\n");
+	printf("  --unique-data: flag to set using unique data for all"
+		" actions that support data, such as header modify and encap actions\n");
 
 	printf("To set flow attributes:\n");
 	printf("  --ingress: set ingress attribute in flows\n");
@@ -567,6 +570,7 @@ args_parse(int argc, char **argv)
 		{ "deletion-rate",              0, 0, 0 },
 		{ "dump-socket-mem",            0, 0, 0 },
 		{ "enable-fwd",                 0, 0, 0 },
+		{ "unique-data",                0, 0, 0 },
 		{ "portmask",                   1, 0, 0 },
 		{ "cores",                      1, 0, 0 },
 		/* Attributes */
@@ -765,6 +769,9 @@ args_parse(int argc, char **argv)
 			if (strcmp(lgopts[opt_idx].name,
 					"dump-iterations") == 0)
 				dump_iterations = true;
+			if (strcmp(lgopts[opt_idx].name,
+					"unique-data") == 0)
+				unique_data = true;
 			if (strcmp(lgopts[opt_idx].name,
 					"deletion-rate") == 0)
 				delete_flag = true;
@@ -1176,7 +1183,7 @@ insert_flows(int port_id, uint8_t core_id)
 		 */
 		flow = generate_flow(port_id, 0, flow_attrs,
 			global_items, global_actions,
-			flow_group, 0, 0, 0, 0, core_id, &error);
+			flow_group, 0, 0, 0, 0, core_id, unique_data, &error);
 
 		if (flow == NULL) {
 			print_flow_error(error);
@@ -1192,7 +1199,7 @@ insert_flows(int port_id, uint8_t core_id)
 			JUMP_ACTION_TABLE, counter,
 			hairpin_queues_num,
 			encap_data, decap_data,
-			core_id, &error);
+			core_id, unique_data, &error);
 
 		if (force_quit)
 			counter = end_counter;
@@ -1863,6 +1870,7 @@ main(int argc, char **argv)
 	delete_flag = false;
 	dump_socket_mem_flag = false;
 	flow_group = DEFAULT_GROUP;
+	unique_data = false;
 
 	signal(SIGINT, signal_handler);
 	signal(SIGTERM, signal_handler);
@@ -1878,7 +1886,6 @@ main(int argc, char **argv)
 	if (nb_lcores <= 1)
 		rte_exit(EXIT_FAILURE, "This app needs at least two cores\n");
 
-
 	printf(":: Flows Count per port: %d\n\n", rules_count);
 
 	if (has_meter())
diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 017e200222..280bf7e0e0 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -100,6 +100,11 @@ The command line options are:
 	Set the number of needed cores to insert/delete rte_flow rules.
 	Default cores count is 1.
 
+*       ``--unique-data``
+        Flag to use unique data for all actions that support data,
+        such as header modify and encap actions. The default is to use
+        fixed data for all flows, for any action that supports data.
+
 Attributes:
 
 *	``--ingress``
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH 3/5] app/flow-perf: fix naming of CPU used structured data
  2021-03-07  9:11 [dpdk-dev] [PATCH 0/5] New flow-perf fixes Wisam Jaddo
  2021-03-07  9:11 ` [dpdk-dev] [PATCH 1/5] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
  2021-03-07  9:11 ` [dpdk-dev] [PATCH 2/5] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
@ 2021-03-07  9:12 ` Wisam Jaddo
  2021-03-07  9:12 ` [dpdk-dev] [PATCH 4/5] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
  2021-03-07  9:12 ` [dpdk-dev] [PATCH 5/5] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
  4 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-07  9:12 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: dongzhou

create_flow and create_meter are not accurate names, since these
structures record both creation and deletion data, which makes
them records of such data rather than creation-only results.

Fixes: d8099d7ecbd0 ("app/flow-perf: split dump functions")
Cc: dongzhou@nvidia.com

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 4054178273..01607881df 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -105,8 +105,8 @@ struct used_cpu_time {
 struct multi_cores_pool {
 	uint32_t cores_count;
 	uint32_t rules_count;
-	struct used_cpu_time create_meter;
-	struct used_cpu_time create_flow;
+	struct used_cpu_time meters_record;
+	struct used_cpu_time flows_record;
 	int64_t last_alloc[RTE_MAX_LCORE];
 	int64_t current_alloc[RTE_MAX_LCORE];
 } __rte_cache_aligned;
@@ -1013,10 +1013,10 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 		cpu_time_used, insertion_rate);
 
 	if (ops == METER_CREATE)
-		mc_pool.create_meter.insertion[port_id][core_id]
+		mc_pool.meters_record.insertion[port_id][core_id]
 			= cpu_time_used;
 	else
-		mc_pool.create_meter.deletion[port_id][core_id]
+		mc_pool.meters_record.deletion[port_id][core_id]
 			= cpu_time_used;
 }
 
@@ -1134,7 +1134,7 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 	printf(":: Port %d :: Core %d :: The time for deleting %d rules is %f seconds\n",
 		port_id, core_id, rules_count_per_core, cpu_time_used);
 
-	mc_pool.create_flow.deletion[port_id][core_id] = cpu_time_used;
+	mc_pool.flows_record.deletion[port_id][core_id] = cpu_time_used;
 }
 
 static struct rte_flow **
@@ -1241,7 +1241,7 @@ insert_flows(int port_id, uint8_t core_id)
 	printf(":: Port %d :: Core %d :: The time for creating %d in rules %f seconds\n",
 		port_id, core_id, rules_count_per_core, cpu_time_used);
 
-	mc_pool.create_flow.insertion[port_id][core_id] = cpu_time_used;
+	mc_pool.flows_record.insertion[port_id][core_id] = cpu_time_used;
 	return flows_list;
 }
 
@@ -1439,9 +1439,9 @@ run_rte_flow_handler_cores(void *data __rte_unused)
 	RTE_ETH_FOREACH_DEV(port) {
 		if (has_meter())
 			dump_used_cpu_time("Meters:",
-				port, &mc_pool.create_meter);
+				port, &mc_pool.meters_record);
 		dump_used_cpu_time("Flows:",
-			port, &mc_pool.create_flow);
+			port, &mc_pool.flows_record);
 		dump_used_mem(port);
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH 4/5] app/flow-perf: fix report total stats for masked ports
  2021-03-07  9:11 [dpdk-dev] [PATCH 0/5] New flow-perf fixes Wisam Jaddo
                   ` (2 preceding siblings ...)
  2021-03-07  9:12 ` [dpdk-dev] [PATCH 3/5] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
@ 2021-03-07  9:12 ` Wisam Jaddo
  2021-03-07  9:12 ` [dpdk-dev] [PATCH 5/5] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
  4 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-07  9:12 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: dongzhou, stable

Take into consideration that the user may pass a portmask for
any run, thus the app should always check whether a port's
stats need to be collected and reported or not.
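
For example, with --portmask=0x5 only ports 0 and 2 are selected, so the
skipped port 1 must not be reported; the bitmask test sketched below (the
same check this patch adds) filters such ports:

	/* skip ports outside the user-provided portmask */
	if (!((ports_mask >> port) & 0x1))
		continue;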

Fixes: 070316d01d3e ("app/flow-perf: add multi-core rule insertion and deletion")
Fixes: d8099d7ecbd0 ("app/flow-perf: split dump functions")
Cc: dongzhou@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 01607881df..e32714131c 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -1437,6 +1437,9 @@ run_rte_flow_handler_cores(void *data __rte_unused)
 	rte_eal_mp_wait_lcore();
 
 	RTE_ETH_FOREACH_DEV(port) {
+		/* If port outside portmask */
+		if (!((ports_mask >> port) & 0x1))
+			continue;
 		if (has_meter())
 			dump_used_cpu_time("Meters:",
 				port, &mc_pool.meters_record);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH 5/5] app/flow-perf: fix the incremental IPv6 src set
  2021-03-07  9:11 [dpdk-dev] [PATCH 0/5] New flow-perf fixes Wisam Jaddo
                   ` (3 preceding siblings ...)
  2021-03-07  9:12 ` [dpdk-dev] [PATCH 4/5] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
@ 2021-03-07  9:12 ` Wisam Jaddo
  4 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-07  9:12 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: wisamm, stable

Currently the memset() does not set a correct src IP that represents
the incremental value of the counter.

This commit fixes that, so each flow has a correct IPv6 src address
that is incremented from the previous flow and equals the counter's
decimal value.
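
As an illustration, with the 32-bit counter written into the last four
bytes of src_addr in network byte order, a flow with para.src_ip == 258
(0x00000102) gets an IPv6 source ending in ...:0102 (i.e. ::0.0.1.2),
and the next flow with 259 ends in ...:0103, so the addresses increment
with the counter.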

Fixes: bf3688f1e816 ("app/flow-perf: add insertion rate calculation")
Cc: wisamm@mellanox.com
Cc: stable@dpdk.org

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/items_gen.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index ccebc08b39..a73de9031f 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -72,14 +72,15 @@ add_ipv6(struct rte_flow_item *items,
 	static struct rte_flow_item_ipv6 ipv6_specs[RTE_MAX_LCORE] __rte_cache_aligned;
 	static struct rte_flow_item_ipv6 ipv6_masks[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint8_t ti = para.core_idx;
+	uint8_t i;
 
 	/** Set ipv6 src **/
-	memset(&ipv6_specs[ti].hdr.src_addr, para.src_ip,
-		sizeof(ipv6_specs->hdr.src_addr) / 2);
-
-	/** Full mask **/
-	memset(&ipv6_masks[ti].hdr.src_addr, 0xff,
-		sizeof(ipv6_specs->hdr.src_addr));
+	for (i = 0; i < 16; i++) {
+		/* Currently src_ip is limited to 32 bit */
+		if (i < 4)
+			ipv6_specs[ti].hdr.src_addr[15 - i] = para.src_ip >> (i * 8);
+		ipv6_masks[ti].hdr.src_addr[15 - i] = 0xff;
+	}
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_IPV6;
 	items[items_counter].spec = &ipv6_specs[ti];
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf
  2021-03-07  9:11 ` [dpdk-dev] [PATCH 1/5] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
@ 2021-03-10 13:45   ` Wisam Jaddo
  2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
                       ` (5 more replies)
  2021-03-10 13:48   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  1 sibling, 6 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:45 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

changes in V2:
1- Add first flow insertion latency calculation.
2- Fix for decap data in raw decap actions.

Wisam Jaddo (7):
  app/flow-perf: start using more generic wrapper for cycles
  app/flow-perf: add new option to use unique data on the fly
  app/flow-perf: fix naming of CPU used structured data
  app/flow-perf: fix report total stats for masked ports
  app/flow-perf: fix the incremental IPv6 src set
  app/flow-perf: add first flow latency support
  app/flow-perf: fix setting decap data for decap actions

 app/test-flow-perf/actions_gen.c | 77 +++++++++++++++++---------------
 app/test-flow-perf/actions_gen.h |  3 +-
 app/test-flow-perf/config.h      |  8 +---
 app/test-flow-perf/flow_gen.c    |  4 +-
 app/test-flow-perf/flow_gen.h    |  1 +
 app/test-flow-perf/items_gen.c   | 13 +++---
 app/test-flow-perf/main.c        | 67 +++++++++++++++++----------
 doc/guides/tools/flow-perf.rst   |  5 +++
 8 files changed, 102 insertions(+), 76 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v2 1/7] app/flow-perf: start using more generic wrapper for cycles
  2021-03-10 13:45   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
@ 2021-03-10 13:45     ` Wisam Jaddo
  2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 2/7] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
                       ` (4 subsequent siblings)
  5 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:45 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

rdtsc() is x86-specific and might fail on other architectures,
so it is better to use a more generic API for cycle measurement.

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 99d0463456..8b5a11c15e 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -969,7 +969,7 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 	end_counter = (core_id + 1) * rules_count_per_core;
 
 	cpu_time_used = 0;
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (counter = start_counter; counter < end_counter; counter++) {
 		if (ops == METER_CREATE)
 			create_meter_rule(port_id, counter);
@@ -984,10 +984,10 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 		if (!((counter + 1) % rules_batch)) {
 			rules_batch_idx = ((counter + 1) / rules_batch) - 1;
 			cpu_time_per_batch[rules_batch_idx] =
-				((double)(rte_rdtsc() - start_batch))
-				/ rte_get_tsc_hz();
+				((double)(rte_get_timer_cycles() - start_batch))
+				/ rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
@@ -1089,7 +1089,7 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 	if (flow_group > 0 && core_id == 0)
 		rules_count_per_core++;
 
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (i = 0; i < (uint32_t) rules_count_per_core; i++) {
 		if (flows_list[i] == 0)
 			break;
@@ -1107,12 +1107,12 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 		 * for this batch.
 		 */
 		if (!((i + 1) % rules_batch)) {
-			end_batch = rte_rdtsc();
+			end_batch = rte_get_timer_cycles();
 			delta = (double) (end_batch - start_batch);
 			rules_batch_idx = ((i + 1) / rules_batch) - 1;
-			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_tsc_hz();
+			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
@@ -1185,7 +1185,7 @@ insert_flows(int port_id, uint8_t core_id)
 		flows_list[flow_index++] = flow;
 	}
 
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (counter = start_counter; counter < end_counter; counter++) {
 		flow = generate_flow(port_id, flow_group,
 			flow_attrs, flow_items, flow_actions,
@@ -1211,12 +1211,12 @@ insert_flows(int port_id, uint8_t core_id)
 		 * for this batch.
 		 */
 		if (!((counter + 1) % rules_batch)) {
-			end_batch = rte_rdtsc();
+			end_batch = rte_get_timer_cycles();
 			delta = (double) (end_batch - start_batch);
 			rules_batch_idx = ((counter + 1) / rules_batch) - 1;
-			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_tsc_hz();
+			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v2 2/7] app/flow-perf: add new option to use unique data on the fly
  2021-03-10 13:45   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
@ 2021-03-10 13:45     ` Wisam Jaddo
  2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 3/7] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
                       ` (3 subsequent siblings)
  5 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:45 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

Currently, unique data can only be enabled by setting the
FIXED_VALUES variable in config.h to 0, and this is decided at
compilation time, so the user can use only a single mode per
compilation.

Starting with this commit, the user has the ability to enable
this feature on the fly by using the new option:
--unique-data

Example of unique data usage:
Insert many rules with different encap data for flows that
have an encap action in them.

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/actions_gen.c | 77 +++++++++++++++++---------------
 app/test-flow-perf/actions_gen.h |  3 +-
 app/test-flow-perf/config.h      |  8 +---
 app/test-flow-perf/flow_gen.c    |  4 +-
 app/test-flow-perf/flow_gen.h    |  1 +
 app/test-flow-perf/main.c        | 13 ++++--
 doc/guides/tools/flow-perf.rst   |  5 +++
 7 files changed, 62 insertions(+), 49 deletions(-)

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 1f5c64fde9..82cddfc676 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -30,6 +30,7 @@ struct additional_para {
 	uint64_t encap_data;
 	uint64_t decap_data;
 	uint8_t core_idx;
+	bool unique_data;
 };
 
 /* Storage for struct rte_flow_action_raw_encap including external data. */
@@ -202,14 +203,14 @@ add_count(struct rte_flow_action *actions,
 static void
 add_set_src_mac(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_mac set_macs[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t mac = para.counter;
 	uint16_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		mac = 1;
 
 	/* Mac address to be set is random each time */
@@ -225,14 +226,14 @@ add_set_src_mac(struct rte_flow_action *actions,
 static void
 add_set_dst_mac(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_mac set_macs[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t mac = para.counter;
 	uint16_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		mac = 1;
 
 	/* Mac address to be set is random each time */
@@ -248,13 +249,13 @@ add_set_dst_mac(struct rte_flow_action *actions,
 static void
 add_set_src_ipv4(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv4 set_ipv4[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ip = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip = 1;
 
 	/* IPv4 value to be set is random each time */
@@ -267,13 +268,13 @@ add_set_src_ipv4(struct rte_flow_action *actions,
 static void
 add_set_dst_ipv4(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv4 set_ipv4[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ip = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip = 1;
 
 	/* IPv4 value to be set is random each time */
@@ -286,14 +287,14 @@ add_set_dst_ipv4(struct rte_flow_action *actions,
 static void
 add_set_src_ipv6(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv6 set_ipv6[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ipv6 = para.counter;
 	uint8_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ipv6 = 1;
 
 	/* IPv6 value to set is random each time */
@@ -309,14 +310,14 @@ add_set_src_ipv6(struct rte_flow_action *actions,
 static void
 add_set_dst_ipv6(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv6 set_ipv6[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ipv6 = para.counter;
 	uint8_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ipv6 = 1;
 
 	/* IPv6 value to set is random each time */
@@ -332,13 +333,13 @@ add_set_dst_ipv6(struct rte_flow_action *actions,
 static void
 add_set_src_tp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_tp set_tp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t tp = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		tp = 100;
 
 	/* TP src port is random each time */
@@ -353,13 +354,13 @@ add_set_src_tp(struct rte_flow_action *actions,
 static void
 add_set_dst_tp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_tp set_tp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t tp = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		tp = 100;
 
 	/* TP src port is random each time */
@@ -375,13 +376,13 @@ add_set_dst_tp(struct rte_flow_action *actions,
 static void
 add_inc_tcp_ack(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ack_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ack_value = 1;
 
 	value[para.core_idx] = RTE_BE32(ack_value);
@@ -393,13 +394,13 @@ add_inc_tcp_ack(struct rte_flow_action *actions,
 static void
 add_dec_tcp_ack(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ack_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ack_value = 1;
 
 	value[para.core_idx] = RTE_BE32(ack_value);
@@ -411,13 +412,13 @@ add_dec_tcp_ack(struct rte_flow_action *actions,
 static void
 add_inc_tcp_seq(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t seq_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		seq_value = 1;
 
 	value[para.core_idx] = RTE_BE32(seq_value);
@@ -429,13 +430,13 @@ add_inc_tcp_seq(struct rte_flow_action *actions,
 static void
 add_dec_tcp_seq(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t seq_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		seq_value = 1;
 
 	value[para.core_idx] = RTE_BE32(seq_value);
@@ -447,13 +448,13 @@ add_dec_tcp_seq(struct rte_flow_action *actions,
 static void
 add_set_ttl(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ttl set_ttl[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ttl_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ttl_value = 1;
 
 	/* Set ttl to random value each time */
@@ -476,13 +477,13 @@ add_dec_ttl(struct rte_flow_action *actions,
 static void
 add_set_ipv4_dscp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_dscp set_dscp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t dscp_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		dscp_value = 1;
 
 	/* Set dscp to random value each time */
@@ -497,13 +498,13 @@ add_set_ipv4_dscp(struct rte_flow_action *actions,
 static void
 add_set_ipv6_dscp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_dscp set_dscp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t dscp_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		dscp_value = 1;
 
 	/* Set dscp to random value each time */
@@ -577,7 +578,7 @@ add_ipv4_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip_dst = 1;
 
 	memset(&ipv4_hdr, 0, sizeof(struct rte_ipv4_hdr));
@@ -643,7 +644,7 @@ add_vxlan_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&vxlan_hdr, 0, sizeof(struct rte_vxlan_hdr));
@@ -666,7 +667,7 @@ add_vxlan_gpe_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&vxlan_gpe_hdr, 0, sizeof(struct rte_vxlan_gpe_hdr));
@@ -707,7 +708,7 @@ add_geneve_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&geneve_hdr, 0, sizeof(struct rte_geneve_hdr));
@@ -730,7 +731,7 @@ add_gtp_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		teid_value = 1;
 
 	memset(&gtp_hdr, 0, sizeof(struct rte_flow_item_gtp));
@@ -849,7 +850,7 @@ add_vxlan_encap(struct rte_flow_action *actions,
 	uint32_t ip_dst = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip_dst = 1;
 
 	items[0].spec = &item_eth;
@@ -907,7 +908,8 @@ add_meter(struct rte_flow_action *actions,
 void
 fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 	uint32_t counter, uint16_t next_table, uint16_t hairpinq,
-	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx)
+	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx,
+	bool unique_data)
 {
 	struct additional_para additional_para_data;
 	uint8_t actions_counter = 0;
@@ -930,6 +932,7 @@ fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 		.encap_data = encap_data,
 		.decap_data = decap_data,
 		.core_idx = core_idx,
+		.unique_data = unique_data,
 	};
 
 	if (hairpinq != 0) {
diff --git a/app/test-flow-perf/actions_gen.h b/app/test-flow-perf/actions_gen.h
index 77353cfe09..6f2f833496 100644
--- a/app/test-flow-perf/actions_gen.h
+++ b/app/test-flow-perf/actions_gen.h
@@ -19,6 +19,7 @@
 
 void fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 	uint32_t counter, uint16_t next_table, uint16_t hairpinq,
-	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx);
+	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx,
+	bool unique_data);
 
 #endif /* FLOW_PERF_ACTION_GEN */
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index 3d4696d61a..a14d4e05e1 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
 #define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP | ETH_RSS_TCP)
+#define GET_RSS_HF() (ETH_RSS_IP)
 
 /* Configuration */
 #define RXQ_NUM 4
@@ -19,12 +19,6 @@
 #define METER_CIR 1250000
 #define DEFAULT_METER_PROF_ID 100
 
-/* This is used for encap/decap & header modify actions.
- * When it's 1: it means all actions have fixed values.
- * When it's 0: it means all actions will have different values.
- */
-#define FIXED_VALUES 1
-
 /* Items/Actions parameters */
 #define JUMP_ACTION_TABLE 2
 #define VLAN_VALUE 1
diff --git a/app/test-flow-perf/flow_gen.c b/app/test-flow-perf/flow_gen.c
index df4af16de8..8f87fac5f6 100644
--- a/app/test-flow-perf/flow_gen.c
+++ b/app/test-flow-perf/flow_gen.c
@@ -46,6 +46,7 @@ generate_flow(uint16_t port_id,
 	uint64_t encap_data,
 	uint64_t decap_data,
 	uint8_t core_idx,
+	bool unique_data,
 	struct rte_flow_error *error)
 {
 	struct rte_flow_attr attr;
@@ -61,7 +62,8 @@ generate_flow(uint16_t port_id,
 
 	fill_actions(actions, flow_actions,
 		outer_ip_src, next_table, hairpinq,
-		encap_data, decap_data, core_idx);
+		encap_data, decap_data, core_idx,
+		unique_data);
 
 	fill_items(items, flow_items, outer_ip_src, core_idx);
 
diff --git a/app/test-flow-perf/flow_gen.h b/app/test-flow-perf/flow_gen.h
index f1d0999af1..dc887fceae 100644
--- a/app/test-flow-perf/flow_gen.h
+++ b/app/test-flow-perf/flow_gen.h
@@ -35,6 +35,7 @@ generate_flow(uint16_t port_id,
 	uint64_t encap_data,
 	uint64_t decap_data,
 	uint8_t core_idx,
+	bool unique_data,
 	struct rte_flow_error *error);
 
 #endif /* FLOW_PERF_FLOW_GEN */
diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 8b5a11c15e..4054178273 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -61,6 +61,7 @@ static bool dump_iterations;
 static bool delete_flag;
 static bool dump_socket_mem_flag;
 static bool enable_fwd;
+static bool unique_data;
 
 static struct rte_mempool *mbuf_mp;
 static uint32_t nb_lcores;
@@ -131,6 +132,8 @@ usage(char *progname)
 	printf("  --enable-fwd: To enable packets forwarding"
 		" after insertion\n");
 	printf("  --portmask=N: hexadecimal bitmask of ports used\n");
+	printf("  --unique-data: flag to set using unique data for all"
+		" actions that support data, such as header modify and encap actions\n");
 
 	printf("To set flow attributes:\n");
 	printf("  --ingress: set ingress attribute in flows\n");
@@ -567,6 +570,7 @@ args_parse(int argc, char **argv)
 		{ "deletion-rate",              0, 0, 0 },
 		{ "dump-socket-mem",            0, 0, 0 },
 		{ "enable-fwd",                 0, 0, 0 },
+		{ "unique-data",                0, 0, 0 },
 		{ "portmask",                   1, 0, 0 },
 		{ "cores",                      1, 0, 0 },
 		/* Attributes */
@@ -765,6 +769,9 @@ args_parse(int argc, char **argv)
 			if (strcmp(lgopts[opt_idx].name,
 					"dump-iterations") == 0)
 				dump_iterations = true;
+			if (strcmp(lgopts[opt_idx].name,
+					"unique-data") == 0)
+				unique_data = true;
 			if (strcmp(lgopts[opt_idx].name,
 					"deletion-rate") == 0)
 				delete_flag = true;
@@ -1176,7 +1183,7 @@ insert_flows(int port_id, uint8_t core_id)
 		 */
 		flow = generate_flow(port_id, 0, flow_attrs,
 			global_items, global_actions,
-			flow_group, 0, 0, 0, 0, core_id, &error);
+			flow_group, 0, 0, 0, 0, core_id, unique_data, &error);
 
 		if (flow == NULL) {
 			print_flow_error(error);
@@ -1192,7 +1199,7 @@ insert_flows(int port_id, uint8_t core_id)
 			JUMP_ACTION_TABLE, counter,
 			hairpin_queues_num,
 			encap_data, decap_data,
-			core_id, &error);
+			core_id, unique_data, &error);
 
 		if (force_quit)
 			counter = end_counter;
@@ -1863,6 +1870,7 @@ main(int argc, char **argv)
 	delete_flag = false;
 	dump_socket_mem_flag = false;
 	flow_group = DEFAULT_GROUP;
+	unique_data = false;
 
 	signal(SIGINT, signal_handler);
 	signal(SIGTERM, signal_handler);
@@ -1878,7 +1886,6 @@ main(int argc, char **argv)
 	if (nb_lcores <= 1)
 		rte_exit(EXIT_FAILURE, "This app needs at least two cores\n");
 
-
 	printf(":: Flows Count per port: %d\n\n", rules_count);
 
 	if (has_meter())
diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 017e200222..280bf7e0e0 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -100,6 +100,11 @@ The command line options are:
 	Set the number of needed cores to insert/delete rte_flow rules.
 	Default cores count is 1.
 
+*       ``--unique-data``
+        Flag to use unique data for all actions that support data,
+        such as header modify and encap actions. The default is to use
+        fixed data for all flows, for any action that supports data.
+
 Attributes:
 
 *	``--ingress``
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v2 3/7] app/flow-perf: fix naming of CPU used structured data
  2021-03-10 13:45   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
  2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 2/7] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
@ 2021-03-10 13:45     ` Wisam Jaddo
  2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 4/7] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
                       ` (2 subsequent siblings)
  5 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:45 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: dongzhou

create_flow and create_meter are not accurate names, since these
structures record both creation and deletion data, which makes
them records of such data rather than creation-only results.

Fixes: d8099d7ecbd0 ("app/flow-perf: split dump functions")
Cc: dongzhou@nvidia.com

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 4054178273..01607881df 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -105,8 +105,8 @@ struct used_cpu_time {
 struct multi_cores_pool {
 	uint32_t cores_count;
 	uint32_t rules_count;
-	struct used_cpu_time create_meter;
-	struct used_cpu_time create_flow;
+	struct used_cpu_time meters_record;
+	struct used_cpu_time flows_record;
 	int64_t last_alloc[RTE_MAX_LCORE];
 	int64_t current_alloc[RTE_MAX_LCORE];
 } __rte_cache_aligned;
@@ -1013,10 +1013,10 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 		cpu_time_used, insertion_rate);
 
 	if (ops == METER_CREATE)
-		mc_pool.create_meter.insertion[port_id][core_id]
+		mc_pool.meters_record.insertion[port_id][core_id]
 			= cpu_time_used;
 	else
-		mc_pool.create_meter.deletion[port_id][core_id]
+		mc_pool.meters_record.deletion[port_id][core_id]
 			= cpu_time_used;
 }
 
@@ -1134,7 +1134,7 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 	printf(":: Port %d :: Core %d :: The time for deleting %d rules is %f seconds\n",
 		port_id, core_id, rules_count_per_core, cpu_time_used);
 
-	mc_pool.create_flow.deletion[port_id][core_id] = cpu_time_used;
+	mc_pool.flows_record.deletion[port_id][core_id] = cpu_time_used;
 }
 
 static struct rte_flow **
@@ -1241,7 +1241,7 @@ insert_flows(int port_id, uint8_t core_id)
 	printf(":: Port %d :: Core %d :: The time for creating %d in rules %f seconds\n",
 		port_id, core_id, rules_count_per_core, cpu_time_used);
 
-	mc_pool.create_flow.insertion[port_id][core_id] = cpu_time_used;
+	mc_pool.flows_record.insertion[port_id][core_id] = cpu_time_used;
 	return flows_list;
 }
 
@@ -1439,9 +1439,9 @@ run_rte_flow_handler_cores(void *data __rte_unused)
 	RTE_ETH_FOREACH_DEV(port) {
 		if (has_meter())
 			dump_used_cpu_time("Meters:",
-				port, &mc_pool.create_meter);
+				port, &mc_pool.meters_record);
 		dump_used_cpu_time("Flows:",
-			port, &mc_pool.create_flow);
+			port, &mc_pool.flows_record);
 		dump_used_mem(port);
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v2 4/7] app/flow-perf: fix report total stats for masked ports
  2021-03-10 13:45   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                       ` (2 preceding siblings ...)
  2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 3/7] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
@ 2021-03-10 13:45     ` Wisam Jaddo
  2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 5/7] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
  2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 6/7] app/flow-perf: add first flow latency support Wisam Jaddo
  5 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:45 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: dongzhou, stable

Take into consideration that the user may pass a portmask for
any run, thus the app should always check whether a port's
stats need to be collected and reported or not.

Fixes: 070316d01d3e ("app/flow-perf: add multi-core rule insertion and deletion")
Fixes: d8099d7ecbd0 ("app/flow-perf: split dump functions")
Cc: dongzhou@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 01607881df..e32714131c 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -1437,6 +1437,9 @@ run_rte_flow_handler_cores(void *data __rte_unused)
 	rte_eal_mp_wait_lcore();
 
 	RTE_ETH_FOREACH_DEV(port) {
+		/* If port outside portmask */
+		if (!((ports_mask >> port) & 0x1))
+			continue;
 		if (has_meter())
 			dump_used_cpu_time("Meters:",
 				port, &mc_pool.meters_record);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v2 5/7] app/flow-perf: fix the incremental IPv6 src set
  2021-03-10 13:45   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                       ` (3 preceding siblings ...)
  2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 4/7] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
@ 2021-03-10 13:45     ` Wisam Jaddo
  2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 6/7] app/flow-perf: add first flow latency support Wisam Jaddo
  5 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:45 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: wisamm, stable

Currently the memset() does not set a src IP that represents the
incremental value of the counter.

This commit fixes that, so each flow gets a correct IPv6 src address
that is incremented from the previous flow and matches the counter's
decimal value.
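
As a standalone sketch of the same logic (function and parameter
names are illustrative), the 32-bit counter ends up in the last four
bytes of the address while the mask stays a full /128:

	#include <stdint.h>

	static void
	set_incremental_ipv6_src(uint8_t src_addr[16], uint8_t mask[16],
			uint32_t src_ip)
	{
		uint8_t i;

		for (i = 0; i < 16; i++) {
			/* src_ip is limited to 32 bits, only 4 bytes vary */
			if (i < 4)
				src_addr[15 - i] = src_ip >> (i * 8);
			mask[15 - i] = 0xff;
		}
	}

	/* e.g. src_ip = 258 sets the last address bytes to 00 00 01 02 */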

Fixes: bf3688f1e816 ("app/flow-perf: add insertion rate calculation")
Cc: wisamm@mellanox.com
Cc: stable@dpdk.org

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/items_gen.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index ccebc08b39..a73de9031f 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -72,14 +72,15 @@ add_ipv6(struct rte_flow_item *items,
 	static struct rte_flow_item_ipv6 ipv6_specs[RTE_MAX_LCORE] __rte_cache_aligned;
 	static struct rte_flow_item_ipv6 ipv6_masks[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint8_t ti = para.core_idx;
+	uint8_t i;
 
 	/** Set ipv6 src **/
-	memset(&ipv6_specs[ti].hdr.src_addr, para.src_ip,
-		sizeof(ipv6_specs->hdr.src_addr) / 2);
-
-	/** Full mask **/
-	memset(&ipv6_masks[ti].hdr.src_addr, 0xff,
-		sizeof(ipv6_specs->hdr.src_addr));
+	for (i = 0; i < 16; i++) {
+		/* Currently src_ip is limited to 32 bit */
+		if (i < 4)
+			ipv6_specs[ti].hdr.src_addr[15 - i] = para.src_ip >> (i * 8);
+		ipv6_masks[ti].hdr.src_addr[15 - i] = 0xff;
+	}
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_IPV6;
 	items[items_counter].spec = &ipv6_specs[ti];
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v2 6/7] app/flow-perf: add first flow latency support
  2021-03-10 13:45   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                       ` (4 preceding siblings ...)
  2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 5/7] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
@ 2021-03-10 13:45     ` Wisam Jaddo
  5 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:45 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

Starting from this commit the app always reports the
latency of the first flow insertion.

This is useful for debugging, to check the first flow
insertion before any caching effects kick in.
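
Roughly, the added measurement amounts to the sketch below (names
are illustrative; the real code lives in insert_flows()):

	#include <stdio.h>
	#include <stdint.h>
	#include <rte_cycles.h>

	static void
	report_first_flow_latency(uint64_t start_cycles)
	{
		/* start_cycles was taken right before the first flow create */
		double latency = (double)(rte_get_timer_cycles() - start_cycles)
				/ rte_get_timer_hz();

		latency *= 1000;	/* seconds -> milliseconds */
		printf(":: First flow installed in %f milliseconds\n", latency);
	}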

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index e32714131c..3d79430e9a 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -1143,6 +1143,7 @@ insert_flows(int port_id, uint8_t core_id)
 	struct rte_flow **flows_list;
 	struct rte_flow_error error;
 	clock_t start_batch, end_batch;
+	double first_flow_latency;
 	double cpu_time_used;
 	double insertion_rate;
 	double cpu_time_per_batch[MAX_BATCHES_COUNT] = { 0 };
@@ -1201,6 +1202,14 @@ insert_flows(int port_id, uint8_t core_id)
 			encap_data, decap_data,
 			core_id, unique_data, &error);
 
+		if (!counter) {
+			first_flow_latency = ((double) (rte_get_timer_cycles() - start_batch) / rte_get_timer_hz());
+			/* In millisecond */
+			first_flow_latency *= 1000;
+			printf(":: First Flow Latency :: Port %d :: First flow installed in %f milliseconds\n",
+				port_id, first_flow_latency);
+		}
+
 		if (force_quit)
 			counter = end_counter;
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf
  2021-03-07  9:11 ` [dpdk-dev] [PATCH 1/5] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
  2021-03-10 13:45   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
@ 2021-03-10 13:48   ` Wisam Jaddo
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
                       ` (6 more replies)
  1 sibling, 7 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:48 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

Changes in v2:
1- Add first flow insertion latency calculation.
2- Fix decap data setting for raw decap actions.

Wisam Jaddo (7):
  app/flow-perf: start using more generic wrapper for cycles
  app/flow-perf: add new option to use unique data on the fly
  app/flow-perf: fix naming of CPU used structured data
  app/flow-perf: fix report total stats for masked ports
  app/flow-perf: fix the incremental IPv6 src set
  app/flow-perf: add first flow latency support
  app/flow-perf: fix setting decap data for decap actions

 app/test-flow-perf/actions_gen.c | 77 +++++++++++++++++---------------
 app/test-flow-perf/actions_gen.h |  3 +-
 app/test-flow-perf/config.h      |  8 +---
 app/test-flow-perf/flow_gen.c    |  4 +-
 app/test-flow-perf/flow_gen.h    |  1 +
 app/test-flow-perf/items_gen.c   | 13 +++---
 app/test-flow-perf/main.c        | 67 +++++++++++++++++----------
 doc/guides/tools/flow-perf.rst   |  5 +++
 8 files changed, 102 insertions(+), 76 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v2 1/7] app/flow-perf: start using more generic wrapper for cycles
  2021-03-10 13:48   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
@ 2021-03-10 13:48     ` Wisam Jaddo
  2021-03-10 13:53       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  2021-03-10 13:55       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 2/7] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
                       ` (5 subsequent siblings)
  6 siblings, 2 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:48 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

rdtsc() is x86 specific and might fail on other architectures,
so it's better to use a more generic API for cycle measurement.
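
For reference, the portable timing pattern the series switches to
looks roughly like this (a sketch, not copied from the patch):

	#include <stdint.h>
	#include <rte_cycles.h>

	static double
	time_one_batch(void (*run_batch)(void))
	{
		uint64_t start = rte_get_timer_cycles();

		run_batch();	/* insert or delete one batch of rules */

		/* rte_get_timer_cycles()/rte_get_timer_hz() work on every
		 * architecture, unlike the x86-only rte_rdtsc() pair.
		 */
		return (double)(rte_get_timer_cycles() - start)
				/ rte_get_timer_hz();
	}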

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 99d0463456..8b5a11c15e 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -969,7 +969,7 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 	end_counter = (core_id + 1) * rules_count_per_core;
 
 	cpu_time_used = 0;
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (counter = start_counter; counter < end_counter; counter++) {
 		if (ops == METER_CREATE)
 			create_meter_rule(port_id, counter);
@@ -984,10 +984,10 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 		if (!((counter + 1) % rules_batch)) {
 			rules_batch_idx = ((counter + 1) / rules_batch) - 1;
 			cpu_time_per_batch[rules_batch_idx] =
-				((double)(rte_rdtsc() - start_batch))
-				/ rte_get_tsc_hz();
+				((double)(rte_get_timer_cycles() - start_batch))
+				/ rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
@@ -1089,7 +1089,7 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 	if (flow_group > 0 && core_id == 0)
 		rules_count_per_core++;
 
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (i = 0; i < (uint32_t) rules_count_per_core; i++) {
 		if (flows_list[i] == 0)
 			break;
@@ -1107,12 +1107,12 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 		 * for this batch.
 		 */
 		if (!((i + 1) % rules_batch)) {
-			end_batch = rte_rdtsc();
+			end_batch = rte_get_timer_cycles();
 			delta = (double) (end_batch - start_batch);
 			rules_batch_idx = ((i + 1) / rules_batch) - 1;
-			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_tsc_hz();
+			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
@@ -1185,7 +1185,7 @@ insert_flows(int port_id, uint8_t core_id)
 		flows_list[flow_index++] = flow;
 	}
 
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (counter = start_counter; counter < end_counter; counter++) {
 		flow = generate_flow(port_id, flow_group,
 			flow_attrs, flow_items, flow_actions,
@@ -1211,12 +1211,12 @@ insert_flows(int port_id, uint8_t core_id)
 		 * for this batch.
 		 */
 		if (!((counter + 1) % rules_batch)) {
-			end_batch = rte_rdtsc();
+			end_batch = rte_get_timer_cycles();
 			delta = (double) (end_batch - start_batch);
 			rules_batch_idx = ((counter + 1) / rules_batch) - 1;
-			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_tsc_hz();
+			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v2 2/7] app/flow-perf: add new option to use unique data on the fly
  2021-03-10 13:48   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
@ 2021-03-10 13:48     ` Wisam Jaddo
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 3/7] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
                       ` (4 subsequent siblings)
  6 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:48 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

Currently, unique data can only be enabled by setting the
FIXED_VALUES define in config.h to 0, which is a compile-time
choice; as a result the user can use only a single mode per
compilation.

Starting with this commit the user can enable this feature on
the fly by using the new option:
--unique-data

Example of unique data usage:
insert many rules with different encap data for flows that
have an encap action in them.
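
Inside each action generator the runtime flag is used roughly as in
the sketch below (struct and function names are illustrative; the
patch itself extends struct additional_para):

	#include <stdbool.h>
	#include <stdint.h>

	struct para_sketch {
		uint32_t counter;	/* per-flow counter */
		bool unique_data;	/* set by --unique-data */
	};

	static uint32_t
	pick_action_data(struct para_sketch para)
	{
		uint32_t ip = para.counter;

		/* Default: the same fixed value for every flow */
		if (!para.unique_data)
			ip = 1;

		/* With --unique-data the per-flow counter is kept, so each
		 * inserted flow carries different action data.
		 */
		return ip;
	}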

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/actions_gen.c | 77 +++++++++++++++++---------------
 app/test-flow-perf/actions_gen.h |  3 +-
 app/test-flow-perf/config.h      |  8 +---
 app/test-flow-perf/flow_gen.c    |  4 +-
 app/test-flow-perf/flow_gen.h    |  1 +
 app/test-flow-perf/main.c        | 13 ++++--
 doc/guides/tools/flow-perf.rst   |  5 +++
 7 files changed, 62 insertions(+), 49 deletions(-)

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 1f5c64fde9..82cddfc676 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -30,6 +30,7 @@ struct additional_para {
 	uint64_t encap_data;
 	uint64_t decap_data;
 	uint8_t core_idx;
+	bool unique_data;
 };
 
 /* Storage for struct rte_flow_action_raw_encap including external data. */
@@ -202,14 +203,14 @@ add_count(struct rte_flow_action *actions,
 static void
 add_set_src_mac(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_mac set_macs[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t mac = para.counter;
 	uint16_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		mac = 1;
 
 	/* Mac address to be set is random each time */
@@ -225,14 +226,14 @@ add_set_src_mac(struct rte_flow_action *actions,
 static void
 add_set_dst_mac(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_mac set_macs[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t mac = para.counter;
 	uint16_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		mac = 1;
 
 	/* Mac address to be set is random each time */
@@ -248,13 +249,13 @@ add_set_dst_mac(struct rte_flow_action *actions,
 static void
 add_set_src_ipv4(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv4 set_ipv4[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ip = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip = 1;
 
 	/* IPv4 value to be set is random each time */
@@ -267,13 +268,13 @@ add_set_src_ipv4(struct rte_flow_action *actions,
 static void
 add_set_dst_ipv4(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv4 set_ipv4[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ip = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip = 1;
 
 	/* IPv4 value to be set is random each time */
@@ -286,14 +287,14 @@ add_set_dst_ipv4(struct rte_flow_action *actions,
 static void
 add_set_src_ipv6(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv6 set_ipv6[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ipv6 = para.counter;
 	uint8_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ipv6 = 1;
 
 	/* IPv6 value to set is random each time */
@@ -309,14 +310,14 @@ add_set_src_ipv6(struct rte_flow_action *actions,
 static void
 add_set_dst_ipv6(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv6 set_ipv6[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ipv6 = para.counter;
 	uint8_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ipv6 = 1;
 
 	/* IPv6 value to set is random each time */
@@ -332,13 +333,13 @@ add_set_dst_ipv6(struct rte_flow_action *actions,
 static void
 add_set_src_tp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_tp set_tp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t tp = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		tp = 100;
 
 	/* TP src port is random each time */
@@ -353,13 +354,13 @@ add_set_src_tp(struct rte_flow_action *actions,
 static void
 add_set_dst_tp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_tp set_tp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t tp = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		tp = 100;
 
 	/* TP src port is random each time */
@@ -375,13 +376,13 @@ add_set_dst_tp(struct rte_flow_action *actions,
 static void
 add_inc_tcp_ack(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ack_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ack_value = 1;
 
 	value[para.core_idx] = RTE_BE32(ack_value);
@@ -393,13 +394,13 @@ add_inc_tcp_ack(struct rte_flow_action *actions,
 static void
 add_dec_tcp_ack(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ack_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ack_value = 1;
 
 	value[para.core_idx] = RTE_BE32(ack_value);
@@ -411,13 +412,13 @@ add_dec_tcp_ack(struct rte_flow_action *actions,
 static void
 add_inc_tcp_seq(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t seq_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		seq_value = 1;
 
 	value[para.core_idx] = RTE_BE32(seq_value);
@@ -429,13 +430,13 @@ add_inc_tcp_seq(struct rte_flow_action *actions,
 static void
 add_dec_tcp_seq(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t seq_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		seq_value = 1;
 
 	value[para.core_idx] = RTE_BE32(seq_value);
@@ -447,13 +448,13 @@ add_dec_tcp_seq(struct rte_flow_action *actions,
 static void
 add_set_ttl(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ttl set_ttl[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ttl_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ttl_value = 1;
 
 	/* Set ttl to random value each time */
@@ -476,13 +477,13 @@ add_dec_ttl(struct rte_flow_action *actions,
 static void
 add_set_ipv4_dscp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_dscp set_dscp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t dscp_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		dscp_value = 1;
 
 	/* Set dscp to random value each time */
@@ -497,13 +498,13 @@ add_set_ipv4_dscp(struct rte_flow_action *actions,
 static void
 add_set_ipv6_dscp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_dscp set_dscp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t dscp_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		dscp_value = 1;
 
 	/* Set dscp to random value each time */
@@ -577,7 +578,7 @@ add_ipv4_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip_dst = 1;
 
 	memset(&ipv4_hdr, 0, sizeof(struct rte_ipv4_hdr));
@@ -643,7 +644,7 @@ add_vxlan_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&vxlan_hdr, 0, sizeof(struct rte_vxlan_hdr));
@@ -666,7 +667,7 @@ add_vxlan_gpe_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&vxlan_gpe_hdr, 0, sizeof(struct rte_vxlan_gpe_hdr));
@@ -707,7 +708,7 @@ add_geneve_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&geneve_hdr, 0, sizeof(struct rte_geneve_hdr));
@@ -730,7 +731,7 @@ add_gtp_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		teid_value = 1;
 
 	memset(&gtp_hdr, 0, sizeof(struct rte_flow_item_gtp));
@@ -849,7 +850,7 @@ add_vxlan_encap(struct rte_flow_action *actions,
 	uint32_t ip_dst = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip_dst = 1;
 
 	items[0].spec = &item_eth;
@@ -907,7 +908,8 @@ add_meter(struct rte_flow_action *actions,
 void
 fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 	uint32_t counter, uint16_t next_table, uint16_t hairpinq,
-	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx)
+	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx,
+	bool unique_data)
 {
 	struct additional_para additional_para_data;
 	uint8_t actions_counter = 0;
@@ -930,6 +932,7 @@ fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 		.encap_data = encap_data,
 		.decap_data = decap_data,
 		.core_idx = core_idx,
+		.unique_data = unique_data,
 	};
 
 	if (hairpinq != 0) {
diff --git a/app/test-flow-perf/actions_gen.h b/app/test-flow-perf/actions_gen.h
index 77353cfe09..6f2f833496 100644
--- a/app/test-flow-perf/actions_gen.h
+++ b/app/test-flow-perf/actions_gen.h
@@ -19,6 +19,7 @@
 
 void fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 	uint32_t counter, uint16_t next_table, uint16_t hairpinq,
-	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx);
+	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx,
+	bool unique_data);
 
 #endif /* FLOW_PERF_ACTION_GEN */
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index 3d4696d61a..a14d4e05e1 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
 #define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP | ETH_RSS_TCP)
+#define GET_RSS_HF() (ETH_RSS_IP)
 
 /* Configuration */
 #define RXQ_NUM 4
@@ -19,12 +19,6 @@
 #define METER_CIR 1250000
 #define DEFAULT_METER_PROF_ID 100
 
-/* This is used for encap/decap & header modify actions.
- * When it's 1: it means all actions have fixed values.
- * When it's 0: it means all actions will have different values.
- */
-#define FIXED_VALUES 1
-
 /* Items/Actions parameters */
 #define JUMP_ACTION_TABLE 2
 #define VLAN_VALUE 1
diff --git a/app/test-flow-perf/flow_gen.c b/app/test-flow-perf/flow_gen.c
index df4af16de8..8f87fac5f6 100644
--- a/app/test-flow-perf/flow_gen.c
+++ b/app/test-flow-perf/flow_gen.c
@@ -46,6 +46,7 @@ generate_flow(uint16_t port_id,
 	uint64_t encap_data,
 	uint64_t decap_data,
 	uint8_t core_idx,
+	bool unique_data,
 	struct rte_flow_error *error)
 {
 	struct rte_flow_attr attr;
@@ -61,7 +62,8 @@ generate_flow(uint16_t port_id,
 
 	fill_actions(actions, flow_actions,
 		outer_ip_src, next_table, hairpinq,
-		encap_data, decap_data, core_idx);
+		encap_data, decap_data, core_idx,
+		unique_data);
 
 	fill_items(items, flow_items, outer_ip_src, core_idx);
 
diff --git a/app/test-flow-perf/flow_gen.h b/app/test-flow-perf/flow_gen.h
index f1d0999af1..dc887fceae 100644
--- a/app/test-flow-perf/flow_gen.h
+++ b/app/test-flow-perf/flow_gen.h
@@ -35,6 +35,7 @@ generate_flow(uint16_t port_id,
 	uint64_t encap_data,
 	uint64_t decap_data,
 	uint8_t core_idx,
+	bool unique_data,
 	struct rte_flow_error *error);
 
 #endif /* FLOW_PERF_FLOW_GEN */
diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 8b5a11c15e..4054178273 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -61,6 +61,7 @@ static bool dump_iterations;
 static bool delete_flag;
 static bool dump_socket_mem_flag;
 static bool enable_fwd;
+static bool unique_data;
 
 static struct rte_mempool *mbuf_mp;
 static uint32_t nb_lcores;
@@ -131,6 +132,8 @@ usage(char *progname)
 	printf("  --enable-fwd: To enable packets forwarding"
 		" after insertion\n");
 	printf("  --portmask=N: hexadecimal bitmask of ports used\n");
+	printf("  --unique-data: flag to set using unique data for all"
+		" actions that support data, such as header modify and encap actions\n");
 
 	printf("To set flow attributes:\n");
 	printf("  --ingress: set ingress attribute in flows\n");
@@ -567,6 +570,7 @@ args_parse(int argc, char **argv)
 		{ "deletion-rate",              0, 0, 0 },
 		{ "dump-socket-mem",            0, 0, 0 },
 		{ "enable-fwd",                 0, 0, 0 },
+		{ "unique-data",                0, 0, 0 },
 		{ "portmask",                   1, 0, 0 },
 		{ "cores",                      1, 0, 0 },
 		/* Attributes */
@@ -765,6 +769,9 @@ args_parse(int argc, char **argv)
 			if (strcmp(lgopts[opt_idx].name,
 					"dump-iterations") == 0)
 				dump_iterations = true;
+			if (strcmp(lgopts[opt_idx].name,
+					"unique-data") == 0)
+				unique_data = true;
 			if (strcmp(lgopts[opt_idx].name,
 					"deletion-rate") == 0)
 				delete_flag = true;
@@ -1176,7 +1183,7 @@ insert_flows(int port_id, uint8_t core_id)
 		 */
 		flow = generate_flow(port_id, 0, flow_attrs,
 			global_items, global_actions,
-			flow_group, 0, 0, 0, 0, core_id, &error);
+			flow_group, 0, 0, 0, 0, core_id, unique_data, &error);
 
 		if (flow == NULL) {
 			print_flow_error(error);
@@ -1192,7 +1199,7 @@ insert_flows(int port_id, uint8_t core_id)
 			JUMP_ACTION_TABLE, counter,
 			hairpin_queues_num,
 			encap_data, decap_data,
-			core_id, &error);
+			core_id, unique_data, &error);
 
 		if (force_quit)
 			counter = end_counter;
@@ -1863,6 +1870,7 @@ main(int argc, char **argv)
 	delete_flag = false;
 	dump_socket_mem_flag = false;
 	flow_group = DEFAULT_GROUP;
+	unique_data = false;
 
 	signal(SIGINT, signal_handler);
 	signal(SIGTERM, signal_handler);
@@ -1878,7 +1886,6 @@ main(int argc, char **argv)
 	if (nb_lcores <= 1)
 		rte_exit(EXIT_FAILURE, "This app needs at least two cores\n");
 
-
 	printf(":: Flows Count per port: %d\n\n", rules_count);
 
 	if (has_meter())
diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 017e200222..280bf7e0e0 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -100,6 +100,11 @@ The command line options are:
 	Set the number of needed cores to insert/delete rte_flow rules.
 	Default cores count is 1.
 
+*       ``--unique-data``
+        Flag to set using unique data for all actions that support data,
+        Such as header modify and encap actions. Default is using fixed
+        data for any action that support data for all flows.
+
 Attributes:
 
 *	``--ingress``
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v2 3/7] app/flow-perf: fix naming of CPU used structured data
  2021-03-10 13:48   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 2/7] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
@ 2021-03-10 13:48     ` Wisam Jaddo
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 4/7] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
                       ` (3 subsequent siblings)
  6 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:48 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: dongzhou

create_flow and create_meter are not accurate names, since these
structures record both creation and deletion data, which makes
them records of such data rather than creation-only fields.

Fixes: d8099d7ecbd0 ("app/flow-perf: split dump functions")
Cc: dongzhou@nvidia.com

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 4054178273..01607881df 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -105,8 +105,8 @@ struct used_cpu_time {
 struct multi_cores_pool {
 	uint32_t cores_count;
 	uint32_t rules_count;
-	struct used_cpu_time create_meter;
-	struct used_cpu_time create_flow;
+	struct used_cpu_time meters_record;
+	struct used_cpu_time flows_record;
 	int64_t last_alloc[RTE_MAX_LCORE];
 	int64_t current_alloc[RTE_MAX_LCORE];
 } __rte_cache_aligned;
@@ -1013,10 +1013,10 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 		cpu_time_used, insertion_rate);
 
 	if (ops == METER_CREATE)
-		mc_pool.create_meter.insertion[port_id][core_id]
+		mc_pool.meters_record.insertion[port_id][core_id]
 			= cpu_time_used;
 	else
-		mc_pool.create_meter.deletion[port_id][core_id]
+		mc_pool.meters_record.deletion[port_id][core_id]
 			= cpu_time_used;
 }
 
@@ -1134,7 +1134,7 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 	printf(":: Port %d :: Core %d :: The time for deleting %d rules is %f seconds\n",
 		port_id, core_id, rules_count_per_core, cpu_time_used);
 
-	mc_pool.create_flow.deletion[port_id][core_id] = cpu_time_used;
+	mc_pool.flows_record.deletion[port_id][core_id] = cpu_time_used;
 }
 
 static struct rte_flow **
@@ -1241,7 +1241,7 @@ insert_flows(int port_id, uint8_t core_id)
 	printf(":: Port %d :: Core %d :: The time for creating %d in rules %f seconds\n",
 		port_id, core_id, rules_count_per_core, cpu_time_used);
 
-	mc_pool.create_flow.insertion[port_id][core_id] = cpu_time_used;
+	mc_pool.flows_record.insertion[port_id][core_id] = cpu_time_used;
 	return flows_list;
 }
 
@@ -1439,9 +1439,9 @@ run_rte_flow_handler_cores(void *data __rte_unused)
 	RTE_ETH_FOREACH_DEV(port) {
 		if (has_meter())
 			dump_used_cpu_time("Meters:",
-				port, &mc_pool.create_meter);
+				port, &mc_pool.meters_record);
 		dump_used_cpu_time("Flows:",
-			port, &mc_pool.create_flow);
+			port, &mc_pool.flows_record);
 		dump_used_mem(port);
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v2 4/7] app/flow-perf: fix report total stats for masked ports
  2021-03-10 13:48   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                       ` (2 preceding siblings ...)
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 3/7] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
@ 2021-03-10 13:48     ` Wisam Jaddo
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 5/7] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
                       ` (2 subsequent siblings)
  6 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:48 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: dongzhou, stable

Take into consideration that the user may pass a portmask for any
run, so the app should always check whether a port's stats need to
be collected and reported or not.

Fixes: 070316d01d3e ("app/flow-perf: add multi-core rule insertion and deletion")
Fixes: d8099d7ecbd0 ("app/flow-perf: split dump functions")
Cc: dongzhou@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 01607881df..e32714131c 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -1437,6 +1437,9 @@ run_rte_flow_handler_cores(void *data __rte_unused)
 	rte_eal_mp_wait_lcore();
 
 	RTE_ETH_FOREACH_DEV(port) {
+		/* If port outside portmask */
+		if (!((ports_mask >> port) & 0x1))
+			continue;
 		if (has_meter())
 			dump_used_cpu_time("Meters:",
 				port, &mc_pool.meters_record);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v2 5/7] app/flow-perf: fix the incremental IPv6 src set
  2021-03-10 13:48   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                       ` (3 preceding siblings ...)
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 4/7] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
@ 2021-03-10 13:48     ` Wisam Jaddo
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 6/7] app/flow-perf: add first flow latency support Wisam Jaddo
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 7/7] app/flow-perf: fix setting decap data for decap actions Wisam Jaddo
  6 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:48 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: wisamm, stable

Currently the memset() does not set a src IP that represents the
incremental value of the counter.

This commit fixes that, so each flow gets a correct IPv6 src address
that is incremented from the previous flow and matches the counter's
decimal value.

Fixes: bf3688f1e816 ("app/flow-perf: add insertion rate calculation")
Cc: wisamm@mellanox.com
Cc: stable@dpdk.org

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/items_gen.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index ccebc08b39..a73de9031f 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -72,14 +72,15 @@ add_ipv6(struct rte_flow_item *items,
 	static struct rte_flow_item_ipv6 ipv6_specs[RTE_MAX_LCORE] __rte_cache_aligned;
 	static struct rte_flow_item_ipv6 ipv6_masks[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint8_t ti = para.core_idx;
+	uint8_t i;
 
 	/** Set ipv6 src **/
-	memset(&ipv6_specs[ti].hdr.src_addr, para.src_ip,
-		sizeof(ipv6_specs->hdr.src_addr) / 2);
-
-	/** Full mask **/
-	memset(&ipv6_masks[ti].hdr.src_addr, 0xff,
-		sizeof(ipv6_specs->hdr.src_addr));
+	for (i = 0; i < 16; i++) {
+		/* Currently src_ip is limited to 32 bit */
+		if (i < 4)
+			ipv6_specs[ti].hdr.src_addr[15 - i] = para.src_ip >> (i * 8);
+		ipv6_masks[ti].hdr.src_addr[15 - i] = 0xff;
+	}
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_IPV6;
 	items[items_counter].spec = &ipv6_specs[ti];
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v2 6/7] app/flow-perf: add first flow latency support
  2021-03-10 13:48   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                       ` (4 preceding siblings ...)
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 5/7] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
@ 2021-03-10 13:48     ` Wisam Jaddo
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 7/7] app/flow-perf: fix setting decap data for decap actions Wisam Jaddo
  6 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:48 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

Starting from this commit the app always reports the
latency of the first flow insertion.

This is useful for debugging, to check the first flow
insertion before any caching effects kick in.

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index e32714131c..3d79430e9a 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -1143,6 +1143,7 @@ insert_flows(int port_id, uint8_t core_id)
 	struct rte_flow **flows_list;
 	struct rte_flow_error error;
 	clock_t start_batch, end_batch;
+	double first_flow_latency;
 	double cpu_time_used;
 	double insertion_rate;
 	double cpu_time_per_batch[MAX_BATCHES_COUNT] = { 0 };
@@ -1201,6 +1202,14 @@ insert_flows(int port_id, uint8_t core_id)
 			encap_data, decap_data,
 			core_id, unique_data, &error);
 
+		if (!counter) {
+			first_flow_latency = ((double) (rte_get_timer_cycles() - start_batch) / rte_get_timer_hz());
+			/* In millisecond */
+			first_flow_latency *= 1000;
+			printf(":: First Flow Latency :: Port %d :: First flow installed in %f milliseconds\n",
+				port_id, first_flow_latency);
+		}
+
 		if (force_quit)
 			counter = end_counter;
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v2 7/7] app/flow-perf: fix setting decap data for decap actions
  2021-03-10 13:48   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                       ` (5 preceding siblings ...)
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 6/7] app/flow-perf: add first flow latency support Wisam Jaddo
@ 2021-03-10 13:48     ` Wisam Jaddo
  6 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:48 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: stable

When using decap actions, the data to decap was being set into
encap_data instead of decap_data; as a result we end up with bad
encap and decap data in many cases.
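
In other words, tokens parsed from the raw decap item list must
accumulate into decap_data; a stripped-down sketch (variable names
mirror the app, the helper is illustrative):

	#include <stdint.h>

	static uint64_t encap_data;	/* items to build for raw encap */
	static uint64_t decap_data;	/* items to strip for raw decap */

	static void
	add_decap_item(uint64_t item_mask)
	{
		/* Before the fix this bit was ORed into encap_data */
		decap_data |= item_mask;
	}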

Fixes: 0c8f1f4ab90e ("app/flow-perf: support raw encap/decap actions")
Cc: stable@dpdk.org

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 3d79430e9a..6bdffef186 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -730,7 +730,7 @@ args_parse(int argc, char **argv)
 					for (i = 0; i < RTE_DIM(flow_options); i++) {
 						if (strcmp(flow_options[i].str, token) == 0) {
 							printf("%s,", token);
-							encap_data |= flow_options[i].mask;
+							decap_data |= flow_options[i].mask;
 							break;
 						}
 						/* Reached last item with no match */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
@ 2021-03-10 13:53       ` Wisam Jaddo
  2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
                           ` (5 more replies)
  2021-03-10 13:55       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  1 sibling, 6 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:53 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

------
v2:
* Add first flow insertion latency calculation.
* Fix decap data setting.

v3:
* Fix commit messages.
* Fix the cover page.

Wisam Jaddo (7):
  app/flow-perf: start using more generic wrapper for cycles
  app/flow-perf: add new option to use unique data on the fly
  app/flow-perf: fix naming of CPU used structured data
  app/flow-perf: fix report total stats for masked ports
  app/flow-perf: fix the incremental IPv6 src set
  app/flow-perf: add first flow latency support
  app/flow-perf: fix setting decap data for decap actions

 app/test-flow-perf/actions_gen.c | 77 +++++++++++++++++---------------
 app/test-flow-perf/actions_gen.h |  3 +-
 app/test-flow-perf/config.h      |  8 +---
 app/test-flow-perf/flow_gen.c    |  4 +-
 app/test-flow-perf/flow_gen.h    |  1 +
 app/test-flow-perf/items_gen.c   | 13 +++---
 app/test-flow-perf/main.c        | 67 +++++++++++++++++----------
 doc/guides/tools/flow-perf.rst   |  5 +++
 8 files changed, 102 insertions(+), 76 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 1/7] app/flow-perf: start using more generic wrapper for cycles
  2021-03-10 13:53       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
@ 2021-03-10 13:53         ` Wisam Jaddo
  2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 2/7] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
                           ` (4 subsequent siblings)
  5 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:53 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

rdtsc() is x86 specific and might fail on other architectures,
so it's better to use a more generic API for cycle measurement.

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 99d0463456..8b5a11c15e 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -969,7 +969,7 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 	end_counter = (core_id + 1) * rules_count_per_core;
 
 	cpu_time_used = 0;
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (counter = start_counter; counter < end_counter; counter++) {
 		if (ops == METER_CREATE)
 			create_meter_rule(port_id, counter);
@@ -984,10 +984,10 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 		if (!((counter + 1) % rules_batch)) {
 			rules_batch_idx = ((counter + 1) / rules_batch) - 1;
 			cpu_time_per_batch[rules_batch_idx] =
-				((double)(rte_rdtsc() - start_batch))
-				/ rte_get_tsc_hz();
+				((double)(rte_get_timer_cycles() - start_batch))
+				/ rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
@@ -1089,7 +1089,7 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 	if (flow_group > 0 && core_id == 0)
 		rules_count_per_core++;
 
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (i = 0; i < (uint32_t) rules_count_per_core; i++) {
 		if (flows_list[i] == 0)
 			break;
@@ -1107,12 +1107,12 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 		 * for this batch.
 		 */
 		if (!((i + 1) % rules_batch)) {
-			end_batch = rte_rdtsc();
+			end_batch = rte_get_timer_cycles();
 			delta = (double) (end_batch - start_batch);
 			rules_batch_idx = ((i + 1) / rules_batch) - 1;
-			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_tsc_hz();
+			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
@@ -1185,7 +1185,7 @@ insert_flows(int port_id, uint8_t core_id)
 		flows_list[flow_index++] = flow;
 	}
 
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (counter = start_counter; counter < end_counter; counter++) {
 		flow = generate_flow(port_id, flow_group,
 			flow_attrs, flow_items, flow_actions,
@@ -1211,12 +1211,12 @@ insert_flows(int port_id, uint8_t core_id)
 		 * for this batch.
 		 */
 		if (!((counter + 1) % rules_batch)) {
-			end_batch = rte_rdtsc();
+			end_batch = rte_get_timer_cycles();
 			delta = (double) (end_batch - start_batch);
 			rules_batch_idx = ((counter + 1) / rules_batch) - 1;
-			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_tsc_hz();
+			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 2/7] app/flow-perf: add new option to use unique data on the fly
  2021-03-10 13:53       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
@ 2021-03-10 13:53         ` Wisam Jaddo
  2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 3/7] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
                           ` (3 subsequent siblings)
  5 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:53 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

Currently, unique data can only be enabled by setting the
FIXED_VALUES define in config.h to 0, which is a compile-time
choice; as a result the user can use only a single mode per
compilation.

Starting with this commit the user can enable this feature on
the fly by using the new option:
--unique-data

Example of unique data usage:
insert many rules with different encap data for flows that
have an encap action in them.

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/actions_gen.c | 77 +++++++++++++++++---------------
 app/test-flow-perf/actions_gen.h |  3 +-
 app/test-flow-perf/config.h      |  8 +---
 app/test-flow-perf/flow_gen.c    |  4 +-
 app/test-flow-perf/flow_gen.h    |  1 +
 app/test-flow-perf/main.c        | 13 ++++--
 doc/guides/tools/flow-perf.rst   |  5 +++
 7 files changed, 62 insertions(+), 49 deletions(-)

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 1f5c64fde9..82cddfc676 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -30,6 +30,7 @@ struct additional_para {
 	uint64_t encap_data;
 	uint64_t decap_data;
 	uint8_t core_idx;
+	bool unique_data;
 };
 
 /* Storage for struct rte_flow_action_raw_encap including external data. */
@@ -202,14 +203,14 @@ add_count(struct rte_flow_action *actions,
 static void
 add_set_src_mac(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_mac set_macs[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t mac = para.counter;
 	uint16_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		mac = 1;
 
 	/* Mac address to be set is random each time */
@@ -225,14 +226,14 @@ add_set_src_mac(struct rte_flow_action *actions,
 static void
 add_set_dst_mac(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_mac set_macs[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t mac = para.counter;
 	uint16_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		mac = 1;
 
 	/* Mac address to be set is random each time */
@@ -248,13 +249,13 @@ add_set_dst_mac(struct rte_flow_action *actions,
 static void
 add_set_src_ipv4(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv4 set_ipv4[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ip = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip = 1;
 
 	/* IPv4 value to be set is random each time */
@@ -267,13 +268,13 @@ add_set_src_ipv4(struct rte_flow_action *actions,
 static void
 add_set_dst_ipv4(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv4 set_ipv4[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ip = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip = 1;
 
 	/* IPv4 value to be set is random each time */
@@ -286,14 +287,14 @@ add_set_dst_ipv4(struct rte_flow_action *actions,
 static void
 add_set_src_ipv6(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv6 set_ipv6[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ipv6 = para.counter;
 	uint8_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ipv6 = 1;
 
 	/* IPv6 value to set is random each time */
@@ -309,14 +310,14 @@ add_set_src_ipv6(struct rte_flow_action *actions,
 static void
 add_set_dst_ipv6(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv6 set_ipv6[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ipv6 = para.counter;
 	uint8_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ipv6 = 1;
 
 	/* IPv6 value to set is random each time */
@@ -332,13 +333,13 @@ add_set_dst_ipv6(struct rte_flow_action *actions,
 static void
 add_set_src_tp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_tp set_tp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t tp = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		tp = 100;
 
 	/* TP src port is random each time */
@@ -353,13 +354,13 @@ add_set_src_tp(struct rte_flow_action *actions,
 static void
 add_set_dst_tp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_tp set_tp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t tp = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		tp = 100;
 
 	/* TP src port is random each time */
@@ -375,13 +376,13 @@ add_set_dst_tp(struct rte_flow_action *actions,
 static void
 add_inc_tcp_ack(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ack_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ack_value = 1;
 
 	value[para.core_idx] = RTE_BE32(ack_value);
@@ -393,13 +394,13 @@ add_inc_tcp_ack(struct rte_flow_action *actions,
 static void
 add_dec_tcp_ack(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ack_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ack_value = 1;
 
 	value[para.core_idx] = RTE_BE32(ack_value);
@@ -411,13 +412,13 @@ add_dec_tcp_ack(struct rte_flow_action *actions,
 static void
 add_inc_tcp_seq(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t seq_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		seq_value = 1;
 
 	value[para.core_idx] = RTE_BE32(seq_value);
@@ -429,13 +430,13 @@ add_inc_tcp_seq(struct rte_flow_action *actions,
 static void
 add_dec_tcp_seq(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t seq_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		seq_value = 1;
 
 	value[para.core_idx] = RTE_BE32(seq_value);
@@ -447,13 +448,13 @@ add_dec_tcp_seq(struct rte_flow_action *actions,
 static void
 add_set_ttl(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ttl set_ttl[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ttl_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ttl_value = 1;
 
 	/* Set ttl to random value each time */
@@ -476,13 +477,13 @@ add_dec_ttl(struct rte_flow_action *actions,
 static void
 add_set_ipv4_dscp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_dscp set_dscp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t dscp_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		dscp_value = 1;
 
 	/* Set dscp to random value each time */
@@ -497,13 +498,13 @@ add_set_ipv4_dscp(struct rte_flow_action *actions,
 static void
 add_set_ipv6_dscp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_dscp set_dscp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t dscp_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		dscp_value = 1;
 
 	/* Set dscp to random value each time */
@@ -577,7 +578,7 @@ add_ipv4_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip_dst = 1;
 
 	memset(&ipv4_hdr, 0, sizeof(struct rte_ipv4_hdr));
@@ -643,7 +644,7 @@ add_vxlan_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&vxlan_hdr, 0, sizeof(struct rte_vxlan_hdr));
@@ -666,7 +667,7 @@ add_vxlan_gpe_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&vxlan_gpe_hdr, 0, sizeof(struct rte_vxlan_gpe_hdr));
@@ -707,7 +708,7 @@ add_geneve_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&geneve_hdr, 0, sizeof(struct rte_geneve_hdr));
@@ -730,7 +731,7 @@ add_gtp_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		teid_value = 1;
 
 	memset(&gtp_hdr, 0, sizeof(struct rte_flow_item_gtp));
@@ -849,7 +850,7 @@ add_vxlan_encap(struct rte_flow_action *actions,
 	uint32_t ip_dst = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip_dst = 1;
 
 	items[0].spec = &item_eth;
@@ -907,7 +908,8 @@ add_meter(struct rte_flow_action *actions,
 void
 fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 	uint32_t counter, uint16_t next_table, uint16_t hairpinq,
-	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx)
+	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx,
+	bool unique_data)
 {
 	struct additional_para additional_para_data;
 	uint8_t actions_counter = 0;
@@ -930,6 +932,7 @@ fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 		.encap_data = encap_data,
 		.decap_data = decap_data,
 		.core_idx = core_idx,
+		.unique_data = unique_data,
 	};
 
 	if (hairpinq != 0) {
diff --git a/app/test-flow-perf/actions_gen.h b/app/test-flow-perf/actions_gen.h
index 77353cfe09..6f2f833496 100644
--- a/app/test-flow-perf/actions_gen.h
+++ b/app/test-flow-perf/actions_gen.h
@@ -19,6 +19,7 @@
 
 void fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 	uint32_t counter, uint16_t next_table, uint16_t hairpinq,
-	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx);
+	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx,
+	bool unique_data);
 
 #endif /* FLOW_PERF_ACTION_GEN */
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index 3d4696d61a..a14d4e05e1 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
 #define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP | ETH_RSS_TCP)
+#define GET_RSS_HF() (ETH_RSS_IP)
 
 /* Configuration */
 #define RXQ_NUM 4
@@ -19,12 +19,6 @@
 #define METER_CIR 1250000
 #define DEFAULT_METER_PROF_ID 100
 
-/* This is used for encap/decap & header modify actions.
- * When it's 1: it means all actions have fixed values.
- * When it's 0: it means all actions will have different values.
- */
-#define FIXED_VALUES 1
-
 /* Items/Actions parameters */
 #define JUMP_ACTION_TABLE 2
 #define VLAN_VALUE 1
diff --git a/app/test-flow-perf/flow_gen.c b/app/test-flow-perf/flow_gen.c
index df4af16de8..8f87fac5f6 100644
--- a/app/test-flow-perf/flow_gen.c
+++ b/app/test-flow-perf/flow_gen.c
@@ -46,6 +46,7 @@ generate_flow(uint16_t port_id,
 	uint64_t encap_data,
 	uint64_t decap_data,
 	uint8_t core_idx,
+	bool unique_data,
 	struct rte_flow_error *error)
 {
 	struct rte_flow_attr attr;
@@ -61,7 +62,8 @@ generate_flow(uint16_t port_id,
 
 	fill_actions(actions, flow_actions,
 		outer_ip_src, next_table, hairpinq,
-		encap_data, decap_data, core_idx);
+		encap_data, decap_data, core_idx,
+		unique_data);
 
 	fill_items(items, flow_items, outer_ip_src, core_idx);
 
diff --git a/app/test-flow-perf/flow_gen.h b/app/test-flow-perf/flow_gen.h
index f1d0999af1..dc887fceae 100644
--- a/app/test-flow-perf/flow_gen.h
+++ b/app/test-flow-perf/flow_gen.h
@@ -35,6 +35,7 @@ generate_flow(uint16_t port_id,
 	uint64_t encap_data,
 	uint64_t decap_data,
 	uint8_t core_idx,
+	bool unique_data,
 	struct rte_flow_error *error);
 
 #endif /* FLOW_PERF_FLOW_GEN */
diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 8b5a11c15e..4054178273 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -61,6 +61,7 @@ static bool dump_iterations;
 static bool delete_flag;
 static bool dump_socket_mem_flag;
 static bool enable_fwd;
+static bool unique_data;
 
 static struct rte_mempool *mbuf_mp;
 static uint32_t nb_lcores;
@@ -131,6 +132,8 @@ usage(char *progname)
 	printf("  --enable-fwd: To enable packets forwarding"
 		" after insertion\n");
 	printf("  --portmask=N: hexadecimal bitmask of ports used\n");
+	printf("  --unique-data: flag to set using unique data for all"
+		" actions that support data, such as header modify and encap actions\n");
 
 	printf("To set flow attributes:\n");
 	printf("  --ingress: set ingress attribute in flows\n");
@@ -567,6 +570,7 @@ args_parse(int argc, char **argv)
 		{ "deletion-rate",              0, 0, 0 },
 		{ "dump-socket-mem",            0, 0, 0 },
 		{ "enable-fwd",                 0, 0, 0 },
+		{ "unique-data",                0, 0, 0 },
 		{ "portmask",                   1, 0, 0 },
 		{ "cores",                      1, 0, 0 },
 		/* Attributes */
@@ -765,6 +769,9 @@ args_parse(int argc, char **argv)
 			if (strcmp(lgopts[opt_idx].name,
 					"dump-iterations") == 0)
 				dump_iterations = true;
+			if (strcmp(lgopts[opt_idx].name,
+					"unique-data") == 0)
+				unique_data = true;
 			if (strcmp(lgopts[opt_idx].name,
 					"deletion-rate") == 0)
 				delete_flag = true;
@@ -1176,7 +1183,7 @@ insert_flows(int port_id, uint8_t core_id)
 		 */
 		flow = generate_flow(port_id, 0, flow_attrs,
 			global_items, global_actions,
-			flow_group, 0, 0, 0, 0, core_id, &error);
+			flow_group, 0, 0, 0, 0, core_id, unique_data, &error);
 
 		if (flow == NULL) {
 			print_flow_error(error);
@@ -1192,7 +1199,7 @@ insert_flows(int port_id, uint8_t core_id)
 			JUMP_ACTION_TABLE, counter,
 			hairpin_queues_num,
 			encap_data, decap_data,
-			core_id, &error);
+			core_id, unique_data, &error);
 
 		if (force_quit)
 			counter = end_counter;
@@ -1863,6 +1870,7 @@ main(int argc, char **argv)
 	delete_flag = false;
 	dump_socket_mem_flag = false;
 	flow_group = DEFAULT_GROUP;
+	unique_data = false;
 
 	signal(SIGINT, signal_handler);
 	signal(SIGTERM, signal_handler);
@@ -1878,7 +1886,6 @@ main(int argc, char **argv)
 	if (nb_lcores <= 1)
 		rte_exit(EXIT_FAILURE, "This app needs at least two cores\n");
 
-
 	printf(":: Flows Count per port: %d\n\n", rules_count);
 
 	if (has_meter())
diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 017e200222..280bf7e0e0 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -100,6 +100,11 @@ The command line options are:
 	Set the number of needed cores to insert/delete rte_flow rules.
 	Default cores count is 1.
 
+*       ``--unique-data``
+        Flag to set using unique data for all actions that support data,
+        such as header modify and encap actions. The default is to use fixed
+        data for any action that supports data, for all flows.
+
 Attributes:
 
 *	``--ingress``
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 3/7] app/flow-perf: fix naming of CPU used structured data
  2021-03-10 13:53       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
  2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 2/7] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
@ 2021-03-10 13:53         ` Wisam Jaddo
  2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 4/7] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
                           ` (2 subsequent siblings)
  5 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:53 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: dongzhou

create_flow and create_meter are misleading names, since these
structures record both creation and deletion results, which makes
them records of that data rather than creation-only data.

Fixes: d8099d7ecbd0 ("app/flow-perf: split dump functions")
Cc: dongzhou@nvidia.com

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 4054178273..01607881df 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -105,8 +105,8 @@ struct used_cpu_time {
 struct multi_cores_pool {
 	uint32_t cores_count;
 	uint32_t rules_count;
-	struct used_cpu_time create_meter;
-	struct used_cpu_time create_flow;
+	struct used_cpu_time meters_record;
+	struct used_cpu_time flows_record;
 	int64_t last_alloc[RTE_MAX_LCORE];
 	int64_t current_alloc[RTE_MAX_LCORE];
 } __rte_cache_aligned;
@@ -1013,10 +1013,10 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 		cpu_time_used, insertion_rate);
 
 	if (ops == METER_CREATE)
-		mc_pool.create_meter.insertion[port_id][core_id]
+		mc_pool.meters_record.insertion[port_id][core_id]
 			= cpu_time_used;
 	else
-		mc_pool.create_meter.deletion[port_id][core_id]
+		mc_pool.meters_record.deletion[port_id][core_id]
 			= cpu_time_used;
 }
 
@@ -1134,7 +1134,7 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 	printf(":: Port %d :: Core %d :: The time for deleting %d rules is %f seconds\n",
 		port_id, core_id, rules_count_per_core, cpu_time_used);
 
-	mc_pool.create_flow.deletion[port_id][core_id] = cpu_time_used;
+	mc_pool.flows_record.deletion[port_id][core_id] = cpu_time_used;
 }
 
 static struct rte_flow **
@@ -1241,7 +1241,7 @@ insert_flows(int port_id, uint8_t core_id)
 	printf(":: Port %d :: Core %d :: The time for creating %d in rules %f seconds\n",
 		port_id, core_id, rules_count_per_core, cpu_time_used);
 
-	mc_pool.create_flow.insertion[port_id][core_id] = cpu_time_used;
+	mc_pool.flows_record.insertion[port_id][core_id] = cpu_time_used;
 	return flows_list;
 }
 
@@ -1439,9 +1439,9 @@ run_rte_flow_handler_cores(void *data __rte_unused)
 	RTE_ETH_FOREACH_DEV(port) {
 		if (has_meter())
 			dump_used_cpu_time("Meters:",
-				port, &mc_pool.create_meter);
+				port, &mc_pool.meters_record);
 		dump_used_cpu_time("Flows:",
-			port, &mc_pool.create_flow);
+			port, &mc_pool.flows_record);
 		dump_used_mem(port);
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 4/7] app/flow-perf: fix report total stats for masked ports
  2021-03-10 13:53       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                           ` (2 preceding siblings ...)
  2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 3/7] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
@ 2021-03-10 13:53         ` Wisam Jaddo
  2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 5/7] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
  2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 6/7] app/flow-perf: add first flow latency support Wisam Jaddo
  5 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:53 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: dongzhou, stable

Take into consideration that the user may pass --portmask on any
run, so the app should always check whether each port's stats need
to be collected and reported or not.
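
For illustration only (not part of the patch), a sketch of the per-port
guard, assuming a 64-bit ports_mask as used by the app; the printf stands
in for the real dump helpers:

#include <stdio.h>
#include <stdint.h>
#include <rte_ethdev.h>

/* Report only the ports selected via --portmask. */
static void
report_selected_ports(uint64_t ports_mask)
{
        uint16_t port;

        RTE_ETH_FOREACH_DEV(port) {
                if (!((ports_mask >> port) & 0x1))
                        continue; /* port outside portmask */
                printf("port %u: collect and report stats\n", port);
        }
}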

Fixes: 070316d01d3e ("app/flow-perf: add multi-core rule insertion and deletion")
Fixes: d8099d7ecbd0 ("app/flow-perf: split dump functions")
Cc: dongzhou@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 01607881df..e32714131c 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -1437,6 +1437,9 @@ run_rte_flow_handler_cores(void *data __rte_unused)
 	rte_eal_mp_wait_lcore();
 
 	RTE_ETH_FOREACH_DEV(port) {
+		/* If port outside portmask */
+		if (!((ports_mask >> port) & 0x1))
+			continue;
 		if (has_meter())
 			dump_used_cpu_time("Meters:",
 				port, &mc_pool.meters_record);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 5/7] app/flow-perf: fix the incremental IPv6 src set
  2021-03-10 13:53       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                           ` (3 preceding siblings ...)
  2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 4/7] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
@ 2021-03-10 13:53         ` Wisam Jaddo
  2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 6/7] app/flow-perf: add first flow latency support Wisam Jaddo
  5 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:53 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: wisamm, stable

Currently the memset() does not set a src IP that represents the
incremental value of the counter.

This commit fixes that: each flow now gets a correct IPv6 src
address that is incremented from the previous flow and equal to the
counter's decimal value.
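
For illustration only (not part of the patch), a sketch of the intended
mapping from the 32-bit counter to the address bytes; the function name
and buffers are hypothetical:

#include <stdint.h>

/* Spread a 32-bit counter over the low-order bytes of an IPv6 address
 * (network byte order) and build a full /128 mask, so flow N matches
 * source address ::N.
 */
static void
ipv6_src_from_counter(uint8_t addr[16], uint8_t mask[16], uint32_t src_ip)
{
        int i;

        for (i = 0; i < 16; i++) {
                /* src_ip is currently limited to 32 bits */
                addr[15 - i] = i < 4 ? (uint8_t)(src_ip >> (i * 8)) : 0;
                mask[15 - i] = 0xff;
        }
}

For example, src_ip = 5 gives the source address ::5 under a full mask.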

Fixes: bf3688f1e816 ("app/flow-perf: add insertion rate calculation")
Cc: wisamm@mellanox.com
Cc: stable@dpdk.org

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/items_gen.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index ccebc08b39..a73de9031f 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -72,14 +72,15 @@ add_ipv6(struct rte_flow_item *items,
 	static struct rte_flow_item_ipv6 ipv6_specs[RTE_MAX_LCORE] __rte_cache_aligned;
 	static struct rte_flow_item_ipv6 ipv6_masks[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint8_t ti = para.core_idx;
+	uint8_t i;
 
 	/** Set ipv6 src **/
-	memset(&ipv6_specs[ti].hdr.src_addr, para.src_ip,
-		sizeof(ipv6_specs->hdr.src_addr) / 2);
-
-	/** Full mask **/
-	memset(&ipv6_masks[ti].hdr.src_addr, 0xff,
-		sizeof(ipv6_specs->hdr.src_addr));
+	for (i = 0; i < 16; i++) {
+		/* Currently src_ip is limited to 32 bit */
+		if (i < 4)
+			ipv6_specs[ti].hdr.src_addr[15 - i] = para.src_ip >> (i * 8);
+		ipv6_masks[ti].hdr.src_addr[15 - i] = 0xff;
+	}
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_IPV6;
 	items[items_counter].spec = &ipv6_specs[ti];
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 6/7] app/flow-perf: add first flow latency support
  2021-03-10 13:53       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                           ` (4 preceding siblings ...)
  2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 5/7] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
@ 2021-03-10 13:53         ` Wisam Jaddo
  5 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:53 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

Starting from this commit the app will always
report the latency of the first flow insertion.

This is useful for debugging, to check the cost of the
first flow insertion before any caching effect kicks in.
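
For illustration only (not part of the patch), a sketch of the
millisecond conversion; "install_first_rule" is a hypothetical callback:

#include <stdio.h>
#include <stdint.h>
#include <rte_cycles.h>

/* Convert the cycles spent on a single insertion into milliseconds. */
static void
report_first_flow_latency(void (*install_first_rule)(void))
{
        uint64_t start = rte_get_timer_cycles();
        double ms;

        install_first_rule();
        ms = 1000.0 * (rte_get_timer_cycles() - start) / rte_get_timer_hz();
        printf(":: First flow installed in %f milliseconds\n", ms);
}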

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index e32714131c..3d79430e9a 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -1143,6 +1143,7 @@ insert_flows(int port_id, uint8_t core_id)
 	struct rte_flow **flows_list;
 	struct rte_flow_error error;
 	clock_t start_batch, end_batch;
+	double first_flow_latency;
 	double cpu_time_used;
 	double insertion_rate;
 	double cpu_time_per_batch[MAX_BATCHES_COUNT] = { 0 };
@@ -1201,6 +1202,14 @@ insert_flows(int port_id, uint8_t core_id)
 			encap_data, decap_data,
 			core_id, unique_data, &error);
 
+		if (!counter) {
+			first_flow_latency = ((double) (rte_get_timer_cycles() - start_batch) / rte_get_timer_hz());
+			/* In millisecond */
+			first_flow_latency *= 1000;
+			printf(":: First Flow Latency :: Port %d :: First flow installed in %f milliseconds\n",
+				port_id, first_flow_latency);
+		}
+
 		if (force_quit)
 			counter = end_counter;
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf
  2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
  2021-03-10 13:53       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
@ 2021-03-10 13:55       ` Wisam Jaddo
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
                           ` (7 more replies)
  1 sibling, 8 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:55 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

------
v2:
* Add first insertion flow latency calculation.
* Fix for decap data set.

v3:
* Fixes in commit message.
* Fixes the cover page.

Wisam Jaddo (7):
  app/flow-perf: start using more generic wrapper for cycles
  app/flow-perf: add new option to use unique data on the fly
  app/flow-perf: fix naming of CPU used structured data
  app/flow-perf: fix report total stats for masked ports
  app/flow-perf: fix the incremental IPv6 src set
  app/flow-perf: add first flow latency support
  app/flow-perf: fix setting decap data for decap actions

 app/test-flow-perf/actions_gen.c | 77 +++++++++++++++++---------------
 app/test-flow-perf/actions_gen.h |  3 +-
 app/test-flow-perf/config.h      |  8 +---
 app/test-flow-perf/flow_gen.c    |  4 +-
 app/test-flow-perf/flow_gen.h    |  1 +
 app/test-flow-perf/items_gen.c   | 13 +++---
 app/test-flow-perf/main.c        | 67 +++++++++++++++++----------
 doc/guides/tools/flow-perf.rst   |  5 +++
 8 files changed, 102 insertions(+), 76 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 1/7] app/flow-perf: start using more generic wrapper for cycles
  2021-03-10 13:55       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
@ 2021-03-10 13:55         ` Wisam Jaddo
  2021-03-14  9:54           ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 2/7] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
                           ` (6 subsequent siblings)
  7 siblings, 1 reply; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:55 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

rdtsc() is x86 specific and may fail on other architectures,
so it is better to use a more generic API for cycle measurement.
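
For illustration only (not part of the patch), a minimal sketch of the
generic timing pattern, assuming DPDK's rte_cycles.h API; "do_work" is a
hypothetical callback:

#include <stdint.h>
#include <rte_cycles.h>

/* Time one callback in seconds; rte_get_timer_cycles() maps to the TSC
 * on x86 and to the architecture's default counter elsewhere.
 */
static double
time_in_seconds(void (*do_work)(void *), void *arg)
{
        uint64_t start = rte_get_timer_cycles();

        do_work(arg);
        return (double)(rte_get_timer_cycles() - start) / rte_get_timer_hz();
}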

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 99d0463456..8b5a11c15e 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -969,7 +969,7 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 	end_counter = (core_id + 1) * rules_count_per_core;
 
 	cpu_time_used = 0;
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (counter = start_counter; counter < end_counter; counter++) {
 		if (ops == METER_CREATE)
 			create_meter_rule(port_id, counter);
@@ -984,10 +984,10 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 		if (!((counter + 1) % rules_batch)) {
 			rules_batch_idx = ((counter + 1) / rules_batch) - 1;
 			cpu_time_per_batch[rules_batch_idx] =
-				((double)(rte_rdtsc() - start_batch))
-				/ rte_get_tsc_hz();
+				((double)(rte_get_timer_cycles() - start_batch))
+				/ rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
@@ -1089,7 +1089,7 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 	if (flow_group > 0 && core_id == 0)
 		rules_count_per_core++;
 
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (i = 0; i < (uint32_t) rules_count_per_core; i++) {
 		if (flows_list[i] == 0)
 			break;
@@ -1107,12 +1107,12 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 		 * for this batch.
 		 */
 		if (!((i + 1) % rules_batch)) {
-			end_batch = rte_rdtsc();
+			end_batch = rte_get_timer_cycles();
 			delta = (double) (end_batch - start_batch);
 			rules_batch_idx = ((i + 1) / rules_batch) - 1;
-			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_tsc_hz();
+			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
@@ -1185,7 +1185,7 @@ insert_flows(int port_id, uint8_t core_id)
 		flows_list[flow_index++] = flow;
 	}
 
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (counter = start_counter; counter < end_counter; counter++) {
 		flow = generate_flow(port_id, flow_group,
 			flow_attrs, flow_items, flow_actions,
@@ -1211,12 +1211,12 @@ insert_flows(int port_id, uint8_t core_id)
 		 * for this batch.
 		 */
 		if (!((counter + 1) % rules_batch)) {
-			end_batch = rte_rdtsc();
+			end_batch = rte_get_timer_cycles();
 			delta = (double) (end_batch - start_batch);
 			rules_batch_idx = ((counter + 1) / rules_batch) - 1;
-			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_tsc_hz();
+			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 2/7] app/flow-perf: add new option to use unique data on the fly
  2021-03-10 13:55       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
@ 2021-03-10 13:55         ` Wisam Jaddo
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 3/7] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
                           ` (5 subsequent siblings)
  7 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:55 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

Unique data is currently enabled by building with the config.h
variable FIXED_VALUES set to 0, so it can only be selected at
compile time and the user is limited to a single mode per build.

Starting with this commit the user can enable this feature at
run time with the new option:
--unique-data

Example of unique data usage:
insert many rules with different encap data for flows that
contain an encap action.
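
For illustration only (not part of the patch), a sketch of the runtime
switch, assuming the rte_flow set-IPv4 action; "set_src_ipv4_conf" and
"flow_counter" are hypothetical names:

#include <stdbool.h>
#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

/* With --unique-data each flow derives its value from the flow counter,
 * otherwise one fixed value is shared by every flow.
 */
static void
set_src_ipv4_conf(struct rte_flow_action_set_ipv4 *conf,
                  uint32_t flow_counter, bool unique_data)
{
        uint32_t ip = unique_data ? flow_counter : 1;

        conf->ipv4_addr = rte_cpu_to_be_32(ip);
}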

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/actions_gen.c | 77 +++++++++++++++++---------------
 app/test-flow-perf/actions_gen.h |  3 +-
 app/test-flow-perf/config.h      |  8 +---
 app/test-flow-perf/flow_gen.c    |  4 +-
 app/test-flow-perf/flow_gen.h    |  1 +
 app/test-flow-perf/main.c        | 13 ++++--
 doc/guides/tools/flow-perf.rst   |  5 +++
 7 files changed, 62 insertions(+), 49 deletions(-)

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 1f5c64fde9..82cddfc676 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -30,6 +30,7 @@ struct additional_para {
 	uint64_t encap_data;
 	uint64_t decap_data;
 	uint8_t core_idx;
+	bool unique_data;
 };
 
 /* Storage for struct rte_flow_action_raw_encap including external data. */
@@ -202,14 +203,14 @@ add_count(struct rte_flow_action *actions,
 static void
 add_set_src_mac(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_mac set_macs[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t mac = para.counter;
 	uint16_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		mac = 1;
 
 	/* Mac address to be set is random each time */
@@ -225,14 +226,14 @@ add_set_src_mac(struct rte_flow_action *actions,
 static void
 add_set_dst_mac(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_mac set_macs[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t mac = para.counter;
 	uint16_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		mac = 1;
 
 	/* Mac address to be set is random each time */
@@ -248,13 +249,13 @@ add_set_dst_mac(struct rte_flow_action *actions,
 static void
 add_set_src_ipv4(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv4 set_ipv4[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ip = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip = 1;
 
 	/* IPv4 value to be set is random each time */
@@ -267,13 +268,13 @@ add_set_src_ipv4(struct rte_flow_action *actions,
 static void
 add_set_dst_ipv4(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv4 set_ipv4[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ip = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip = 1;
 
 	/* IPv4 value to be set is random each time */
@@ -286,14 +287,14 @@ add_set_dst_ipv4(struct rte_flow_action *actions,
 static void
 add_set_src_ipv6(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv6 set_ipv6[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ipv6 = para.counter;
 	uint8_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ipv6 = 1;
 
 	/* IPv6 value to set is random each time */
@@ -309,14 +310,14 @@ add_set_src_ipv6(struct rte_flow_action *actions,
 static void
 add_set_dst_ipv6(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv6 set_ipv6[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ipv6 = para.counter;
 	uint8_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ipv6 = 1;
 
 	/* IPv6 value to set is random each time */
@@ -332,13 +333,13 @@ add_set_dst_ipv6(struct rte_flow_action *actions,
 static void
 add_set_src_tp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_tp set_tp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t tp = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		tp = 100;
 
 	/* TP src port is random each time */
@@ -353,13 +354,13 @@ add_set_src_tp(struct rte_flow_action *actions,
 static void
 add_set_dst_tp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_tp set_tp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t tp = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		tp = 100;
 
 	/* TP src port is random each time */
@@ -375,13 +376,13 @@ add_set_dst_tp(struct rte_flow_action *actions,
 static void
 add_inc_tcp_ack(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ack_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ack_value = 1;
 
 	value[para.core_idx] = RTE_BE32(ack_value);
@@ -393,13 +394,13 @@ add_inc_tcp_ack(struct rte_flow_action *actions,
 static void
 add_dec_tcp_ack(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ack_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ack_value = 1;
 
 	value[para.core_idx] = RTE_BE32(ack_value);
@@ -411,13 +412,13 @@ add_dec_tcp_ack(struct rte_flow_action *actions,
 static void
 add_inc_tcp_seq(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t seq_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		seq_value = 1;
 
 	value[para.core_idx] = RTE_BE32(seq_value);
@@ -429,13 +430,13 @@ add_inc_tcp_seq(struct rte_flow_action *actions,
 static void
 add_dec_tcp_seq(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t seq_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		seq_value = 1;
 
 	value[para.core_idx] = RTE_BE32(seq_value);
@@ -447,13 +448,13 @@ add_dec_tcp_seq(struct rte_flow_action *actions,
 static void
 add_set_ttl(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ttl set_ttl[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ttl_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ttl_value = 1;
 
 	/* Set ttl to random value each time */
@@ -476,13 +477,13 @@ add_dec_ttl(struct rte_flow_action *actions,
 static void
 add_set_ipv4_dscp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_dscp set_dscp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t dscp_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		dscp_value = 1;
 
 	/* Set dscp to random value each time */
@@ -497,13 +498,13 @@ add_set_ipv4_dscp(struct rte_flow_action *actions,
 static void
 add_set_ipv6_dscp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_dscp set_dscp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t dscp_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		dscp_value = 1;
 
 	/* Set dscp to random value each time */
@@ -577,7 +578,7 @@ add_ipv4_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip_dst = 1;
 
 	memset(&ipv4_hdr, 0, sizeof(struct rte_ipv4_hdr));
@@ -643,7 +644,7 @@ add_vxlan_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&vxlan_hdr, 0, sizeof(struct rte_vxlan_hdr));
@@ -666,7 +667,7 @@ add_vxlan_gpe_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&vxlan_gpe_hdr, 0, sizeof(struct rte_vxlan_gpe_hdr));
@@ -707,7 +708,7 @@ add_geneve_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&geneve_hdr, 0, sizeof(struct rte_geneve_hdr));
@@ -730,7 +731,7 @@ add_gtp_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		teid_value = 1;
 
 	memset(&gtp_hdr, 0, sizeof(struct rte_flow_item_gtp));
@@ -849,7 +850,7 @@ add_vxlan_encap(struct rte_flow_action *actions,
 	uint32_t ip_dst = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip_dst = 1;
 
 	items[0].spec = &item_eth;
@@ -907,7 +908,8 @@ add_meter(struct rte_flow_action *actions,
 void
 fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 	uint32_t counter, uint16_t next_table, uint16_t hairpinq,
-	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx)
+	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx,
+	bool unique_data)
 {
 	struct additional_para additional_para_data;
 	uint8_t actions_counter = 0;
@@ -930,6 +932,7 @@ fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 		.encap_data = encap_data,
 		.decap_data = decap_data,
 		.core_idx = core_idx,
+		.unique_data = unique_data,
 	};
 
 	if (hairpinq != 0) {
diff --git a/app/test-flow-perf/actions_gen.h b/app/test-flow-perf/actions_gen.h
index 77353cfe09..6f2f833496 100644
--- a/app/test-flow-perf/actions_gen.h
+++ b/app/test-flow-perf/actions_gen.h
@@ -19,6 +19,7 @@
 
 void fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 	uint32_t counter, uint16_t next_table, uint16_t hairpinq,
-	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx);
+	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx,
+	bool unique_data);
 
 #endif /* FLOW_PERF_ACTION_GEN */
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index 3d4696d61a..a14d4e05e1 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
 #define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP | ETH_RSS_TCP)
+#define GET_RSS_HF() (ETH_RSS_IP)
 
 /* Configuration */
 #define RXQ_NUM 4
@@ -19,12 +19,6 @@
 #define METER_CIR 1250000
 #define DEFAULT_METER_PROF_ID 100
 
-/* This is used for encap/decap & header modify actions.
- * When it's 1: it means all actions have fixed values.
- * When it's 0: it means all actions will have different values.
- */
-#define FIXED_VALUES 1
-
 /* Items/Actions parameters */
 #define JUMP_ACTION_TABLE 2
 #define VLAN_VALUE 1
diff --git a/app/test-flow-perf/flow_gen.c b/app/test-flow-perf/flow_gen.c
index df4af16de8..8f87fac5f6 100644
--- a/app/test-flow-perf/flow_gen.c
+++ b/app/test-flow-perf/flow_gen.c
@@ -46,6 +46,7 @@ generate_flow(uint16_t port_id,
 	uint64_t encap_data,
 	uint64_t decap_data,
 	uint8_t core_idx,
+	bool unique_data,
 	struct rte_flow_error *error)
 {
 	struct rte_flow_attr attr;
@@ -61,7 +62,8 @@ generate_flow(uint16_t port_id,
 
 	fill_actions(actions, flow_actions,
 		outer_ip_src, next_table, hairpinq,
-		encap_data, decap_data, core_idx);
+		encap_data, decap_data, core_idx,
+		unique_data);
 
 	fill_items(items, flow_items, outer_ip_src, core_idx);
 
diff --git a/app/test-flow-perf/flow_gen.h b/app/test-flow-perf/flow_gen.h
index f1d0999af1..dc887fceae 100644
--- a/app/test-flow-perf/flow_gen.h
+++ b/app/test-flow-perf/flow_gen.h
@@ -35,6 +35,7 @@ generate_flow(uint16_t port_id,
 	uint64_t encap_data,
 	uint64_t decap_data,
 	uint8_t core_idx,
+	bool unique_data,
 	struct rte_flow_error *error);
 
 #endif /* FLOW_PERF_FLOW_GEN */
diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 8b5a11c15e..4054178273 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -61,6 +61,7 @@ static bool dump_iterations;
 static bool delete_flag;
 static bool dump_socket_mem_flag;
 static bool enable_fwd;
+static bool unique_data;
 
 static struct rte_mempool *mbuf_mp;
 static uint32_t nb_lcores;
@@ -131,6 +132,8 @@ usage(char *progname)
 	printf("  --enable-fwd: To enable packets forwarding"
 		" after insertion\n");
 	printf("  --portmask=N: hexadecimal bitmask of ports used\n");
+	printf("  --unique-data: flag to set using unique data for all"
+		" actions that support data, such as header modify and encap actions\n");
 
 	printf("To set flow attributes:\n");
 	printf("  --ingress: set ingress attribute in flows\n");
@@ -567,6 +570,7 @@ args_parse(int argc, char **argv)
 		{ "deletion-rate",              0, 0, 0 },
 		{ "dump-socket-mem",            0, 0, 0 },
 		{ "enable-fwd",                 0, 0, 0 },
+		{ "unique-data",                0, 0, 0 },
 		{ "portmask",                   1, 0, 0 },
 		{ "cores",                      1, 0, 0 },
 		/* Attributes */
@@ -765,6 +769,9 @@ args_parse(int argc, char **argv)
 			if (strcmp(lgopts[opt_idx].name,
 					"dump-iterations") == 0)
 				dump_iterations = true;
+			if (strcmp(lgopts[opt_idx].name,
+					"unique-data") == 0)
+				unique_data = true;
 			if (strcmp(lgopts[opt_idx].name,
 					"deletion-rate") == 0)
 				delete_flag = true;
@@ -1176,7 +1183,7 @@ insert_flows(int port_id, uint8_t core_id)
 		 */
 		flow = generate_flow(port_id, 0, flow_attrs,
 			global_items, global_actions,
-			flow_group, 0, 0, 0, 0, core_id, &error);
+			flow_group, 0, 0, 0, 0, core_id, unique_data, &error);
 
 		if (flow == NULL) {
 			print_flow_error(error);
@@ -1192,7 +1199,7 @@ insert_flows(int port_id, uint8_t core_id)
 			JUMP_ACTION_TABLE, counter,
 			hairpin_queues_num,
 			encap_data, decap_data,
-			core_id, &error);
+			core_id, unique_data, &error);
 
 		if (force_quit)
 			counter = end_counter;
@@ -1863,6 +1870,7 @@ main(int argc, char **argv)
 	delete_flag = false;
 	dump_socket_mem_flag = false;
 	flow_group = DEFAULT_GROUP;
+	unique_data = false;
 
 	signal(SIGINT, signal_handler);
 	signal(SIGTERM, signal_handler);
@@ -1878,7 +1886,6 @@ main(int argc, char **argv)
 	if (nb_lcores <= 1)
 		rte_exit(EXIT_FAILURE, "This app needs at least two cores\n");
 
-
 	printf(":: Flows Count per port: %d\n\n", rules_count);
 
 	if (has_meter())
diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 017e200222..280bf7e0e0 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -100,6 +100,11 @@ The command line options are:
 	Set the number of needed cores to insert/delete rte_flow rules.
 	Default cores count is 1.
 
+*       ``--unique-data``
+        Flag to set using unique data for all actions that support data,
+        such as header modify and encap actions. The default is to use fixed
+        data for any action that supports data, for all flows.
+
 Attributes:
 
 *	``--ingress``
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 3/7] app/flow-perf: fix naming of CPU used structured data
  2021-03-10 13:55       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 2/7] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
@ 2021-03-10 13:55         ` Wisam Jaddo
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 4/7] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
                           ` (4 subsequent siblings)
  7 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:55 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: dongzhou

create_flow and create_meter are misleading names, since these
structures record both creation and deletion results, which makes
them records of that data rather than creation-only data.

Fixes: d8099d7ecbd0 ("app/flow-perf: split dump functions")
Cc: dongzhou@nvidia.com

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 4054178273..01607881df 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -105,8 +105,8 @@ struct used_cpu_time {
 struct multi_cores_pool {
 	uint32_t cores_count;
 	uint32_t rules_count;
-	struct used_cpu_time create_meter;
-	struct used_cpu_time create_flow;
+	struct used_cpu_time meters_record;
+	struct used_cpu_time flows_record;
 	int64_t last_alloc[RTE_MAX_LCORE];
 	int64_t current_alloc[RTE_MAX_LCORE];
 } __rte_cache_aligned;
@@ -1013,10 +1013,10 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 		cpu_time_used, insertion_rate);
 
 	if (ops == METER_CREATE)
-		mc_pool.create_meter.insertion[port_id][core_id]
+		mc_pool.meters_record.insertion[port_id][core_id]
 			= cpu_time_used;
 	else
-		mc_pool.create_meter.deletion[port_id][core_id]
+		mc_pool.meters_record.deletion[port_id][core_id]
 			= cpu_time_used;
 }
 
@@ -1134,7 +1134,7 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 	printf(":: Port %d :: Core %d :: The time for deleting %d rules is %f seconds\n",
 		port_id, core_id, rules_count_per_core, cpu_time_used);
 
-	mc_pool.create_flow.deletion[port_id][core_id] = cpu_time_used;
+	mc_pool.flows_record.deletion[port_id][core_id] = cpu_time_used;
 }
 
 static struct rte_flow **
@@ -1241,7 +1241,7 @@ insert_flows(int port_id, uint8_t core_id)
 	printf(":: Port %d :: Core %d :: The time for creating %d in rules %f seconds\n",
 		port_id, core_id, rules_count_per_core, cpu_time_used);
 
-	mc_pool.create_flow.insertion[port_id][core_id] = cpu_time_used;
+	mc_pool.flows_record.insertion[port_id][core_id] = cpu_time_used;
 	return flows_list;
 }
 
@@ -1439,9 +1439,9 @@ run_rte_flow_handler_cores(void *data __rte_unused)
 	RTE_ETH_FOREACH_DEV(port) {
 		if (has_meter())
 			dump_used_cpu_time("Meters:",
-				port, &mc_pool.create_meter);
+				port, &mc_pool.meters_record);
 		dump_used_cpu_time("Flows:",
-			port, &mc_pool.create_flow);
+			port, &mc_pool.flows_record);
 		dump_used_mem(port);
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 4/7] app/flow-perf: fix report total stats for masked ports
  2021-03-10 13:55       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                           ` (2 preceding siblings ...)
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 3/7] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
@ 2021-03-10 13:55         ` Wisam Jaddo
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 5/7] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
                           ` (3 subsequent siblings)
  7 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:55 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: dongzhou, stable

Take into consideration that the user may pass --portmask on any
run, so the app should always check whether each port's stats need
to be collected and reported or not.

Fixes: 070316d01d3e ("app/flow-perf: add multi-core rule insertion and deletion")
Fixes: d8099d7ecbd0 ("app/flow-perf: split dump functions")
Cc: dongzhou@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 01607881df..e32714131c 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -1437,6 +1437,9 @@ run_rte_flow_handler_cores(void *data __rte_unused)
 	rte_eal_mp_wait_lcore();
 
 	RTE_ETH_FOREACH_DEV(port) {
+		/* If port outside portmask */
+		if (!((ports_mask >> port) & 0x1))
+			continue;
 		if (has_meter())
 			dump_used_cpu_time("Meters:",
 				port, &mc_pool.meters_record);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 5/7] app/flow-perf: fix the incremental IPv6 src set
  2021-03-10 13:55       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                           ` (3 preceding siblings ...)
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 4/7] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
@ 2021-03-10 13:55         ` Wisam Jaddo
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 6/7] app/flow-perf: add first flow latency support Wisam Jaddo
                           ` (2 subsequent siblings)
  7 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:55 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: wisamm, stable

Currently the memset() does not set a src IP that represents the
incremental value of the counter.

This commit fixes that: each flow now gets a correct IPv6 src
address that is incremented from the previous flow and equal to the
counter's decimal value.

Fixes: bf3688f1e816 ("app/flow-perf: add insertion rate calculation")
Cc: wisamm@mellanox.com
Cc: stable@dpdk.org

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/items_gen.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index ccebc08b39..a73de9031f 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -72,14 +72,15 @@ add_ipv6(struct rte_flow_item *items,
 	static struct rte_flow_item_ipv6 ipv6_specs[RTE_MAX_LCORE] __rte_cache_aligned;
 	static struct rte_flow_item_ipv6 ipv6_masks[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint8_t ti = para.core_idx;
+	uint8_t i;
 
 	/** Set ipv6 src **/
-	memset(&ipv6_specs[ti].hdr.src_addr, para.src_ip,
-		sizeof(ipv6_specs->hdr.src_addr) / 2);
-
-	/** Full mask **/
-	memset(&ipv6_masks[ti].hdr.src_addr, 0xff,
-		sizeof(ipv6_specs->hdr.src_addr));
+	for (i = 0; i < 16; i++) {
+		/* Currently src_ip is limited to 32 bit */
+		if (i < 4)
+			ipv6_specs[ti].hdr.src_addr[15 - i] = para.src_ip >> (i * 8);
+		ipv6_masks[ti].hdr.src_addr[15 - i] = 0xff;
+	}
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_IPV6;
 	items[items_counter].spec = &ipv6_specs[ti];
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 6/7] app/flow-perf: add first flow latency support
  2021-03-10 13:55       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                           ` (4 preceding siblings ...)
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 5/7] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
@ 2021-03-10 13:55         ` Wisam Jaddo
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 7/7] app/flow-perf: fix setting decap data for decap actions Wisam Jaddo
  2021-03-10 21:54         ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Alexander Kozyrev
  7 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:55 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

Starting from this commit the app will always
report the latency of the first flow insertion.

This is useful for debugging, to check the cost of the
first flow insertion before any caching effect kicks in.

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index e32714131c..3d79430e9a 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -1143,6 +1143,7 @@ insert_flows(int port_id, uint8_t core_id)
 	struct rte_flow **flows_list;
 	struct rte_flow_error error;
 	clock_t start_batch, end_batch;
+	double first_flow_latency;
 	double cpu_time_used;
 	double insertion_rate;
 	double cpu_time_per_batch[MAX_BATCHES_COUNT] = { 0 };
@@ -1201,6 +1202,14 @@ insert_flows(int port_id, uint8_t core_id)
 			encap_data, decap_data,
 			core_id, unique_data, &error);
 
+		if (!counter) {
+			first_flow_latency = ((double) (rte_get_timer_cycles() - start_batch) / rte_get_timer_hz());
+			/* In millisecond */
+			first_flow_latency *= 1000;
+			printf(":: First Flow Latency :: Port %d :: First flow installed in %f milliseconds\n",
+				port_id, first_flow_latency);
+		}
+
 		if (force_quit)
 			counter = end_counter;
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 7/7] app/flow-perf: fix setting decap data for decap actions
  2021-03-10 13:55       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                           ` (5 preceding siblings ...)
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 6/7] app/flow-perf: add first flow latency support Wisam Jaddo
@ 2021-03-10 13:55         ` Wisam Jaddo
  2021-03-10 21:54         ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Alexander Kozyrev
  7 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-10 13:55 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: stable

When using decap actions, the data to decap was being stored in
encap_data instead of decap_data; as a result we ended up with
bad encap and decap data in many cases.

Fixes: 0c8f1f4ab90e ("app/flow-perf: support raw encap/decap actions")
Cc: stable@dpdk.org

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
---
 app/test-flow-perf/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 3d79430e9a..6bdffef186 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -730,7 +730,7 @@ args_parse(int argc, char **argv)
 					for (i = 0; i < RTE_DIM(flow_options); i++) {
 						if (strcmp(flow_options[i].str, token) == 0) {
 							printf("%s,", token);
-							encap_data |= flow_options[i].mask;
+							decap_data |= flow_options[i].mask;
 							break;
 						}
 						/* Reached last item with no match */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf
  2021-03-10 13:55       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                           ` (6 preceding siblings ...)
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 7/7] app/flow-perf: fix setting decap data for decap actions Wisam Jaddo
@ 2021-03-10 21:54         ` Alexander Kozyrev
  7 siblings, 0 replies; 46+ messages in thread
From: Alexander Kozyrev @ 2021-03-10 21:54 UTC (permalink / raw)
  To: Wisam Monther, arybchenko, NBU-Contact-Thomas Monjalon,
	Raslan Darawsheh, dev

> -----Original Message-----
> From: Wisam Monther <wisamm@nvidia.com>
> Sent: Wednesday, March 10, 2021 8:56
> To: arybchenko@solarflare.com; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; Alexander Kozyrev <akozyrev@nvidia.com>;
> Raslan Darawsheh <rasland@nvidia.com>; dev@dpdk.org
> Subject: [PATCH v3 0/7] Enhancements and fixes for flow-perf
> 
> ------
> v2:
> * Add first insertion flow latency calculation.
> * Fix for decap data set.
> 
> v3:
> * Fixes in commit message.
> * Fixes the cover page.
> 
> Wisam Jaddo (7):
>   app/flow-perf: start using more generic wrapper for cycles
>   app/flow-perf: add new option to use unique data on the fly
>   app/flow-perf: fix naming of CPU used structured data
>   app/flow-perf: fix report total stats for masked ports
>   app/flow-perf: fix the incremental IPv6 src set
>   app/flow-perf: add first flow latency support
>   app/flow-perf: fix setting decap data for decap actions
> 
>  app/test-flow-perf/actions_gen.c | 77 +++++++++++++++++---------------
>  app/test-flow-perf/actions_gen.h |  3 +-
>  app/test-flow-perf/config.h      |  8 +---
>  app/test-flow-perf/flow_gen.c    |  4 +-
>  app/test-flow-perf/flow_gen.h    |  1 +
>  app/test-flow-perf/items_gen.c   | 13 +++---
>  app/test-flow-perf/main.c        | 67 +++++++++++++++++----------
>  doc/guides/tools/flow-perf.rst   |  5 +++
>  8 files changed, 102 insertions(+), 76 deletions(-)
> 
> --
> 2.17.1

Please add some summary to the cover letter and
fix the warning in patch 6/7. Other than that:
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>

^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf
  2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
@ 2021-03-14  9:54           ` Wisam Jaddo
  2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
                               ` (7 more replies)
  0 siblings, 8 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-14  9:54 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

Introduce enhancements and fixes for flow-perf:
1- Fix decap data for raw-decap actions.
2- Fix setting IPv6 source.
3- Add first flow latency calculation.
4- Fix CPU time record naming and per-port stats reporting.
5- Use more generic wrapper for cycles.
6- Use unique data on the fly.

------
v2:
* Add first insertion flow latency calculation.
* Fix for decap data set.

v3:
* Fixes in commit message.
* Fixes the cover page.

v4:
* Fix warning about a line longer than 100 chars.
* Add more description to the cover letter.

Wisam Jaddo (7):
  app/flow-perf: start using more generic wrapper for cycles
  app/flow-perf: add new option to use unique data on the fly
  app/flow-perf: fix naming of CPU used structured data
  app/flow-perf: fix report total stats for masked ports
  app/flow-perf: fix the incremental IPv6 src set
  app/flow-perf: add first flow latency support
  app/flow-perf: fix setting decap data for decap actions

 app/test-flow-perf/actions_gen.c | 77 +++++++++++++++++---------------
 app/test-flow-perf/actions_gen.h |  3 +-
 app/test-flow-perf/config.h      |  8 +---
 app/test-flow-perf/flow_gen.c    |  4 +-
 app/test-flow-perf/flow_gen.h    |  1 +
 app/test-flow-perf/items_gen.c   | 13 +++---
 app/test-flow-perf/main.c        | 67 +++++++++++++++++----------
 doc/guides/tools/flow-perf.rst   |  5 +++
 8 files changed, 102 insertions(+), 76 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v4 1/7] app/flow-perf: start using more generic wrapper for cycles
  2021-03-14  9:54           ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
@ 2021-03-14  9:54             ` Wisam Jaddo
  2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 2/7] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
                               ` (6 subsequent siblings)
  7 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-14  9:54 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

rdtsc() is x86 specific and may fail on other architectures,
so it is better to use a more generic API for cycle measurement.

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-flow-perf/main.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 99d0463456..8b5a11c15e 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -969,7 +969,7 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 	end_counter = (core_id + 1) * rules_count_per_core;
 
 	cpu_time_used = 0;
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (counter = start_counter; counter < end_counter; counter++) {
 		if (ops == METER_CREATE)
 			create_meter_rule(port_id, counter);
@@ -984,10 +984,10 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 		if (!((counter + 1) % rules_batch)) {
 			rules_batch_idx = ((counter + 1) / rules_batch) - 1;
 			cpu_time_per_batch[rules_batch_idx] =
-				((double)(rte_rdtsc() - start_batch))
-				/ rte_get_tsc_hz();
+				((double)(rte_get_timer_cycles() - start_batch))
+				/ rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
@@ -1089,7 +1089,7 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 	if (flow_group > 0 && core_id == 0)
 		rules_count_per_core++;
 
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (i = 0; i < (uint32_t) rules_count_per_core; i++) {
 		if (flows_list[i] == 0)
 			break;
@@ -1107,12 +1107,12 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 		 * for this batch.
 		 */
 		if (!((i + 1) % rules_batch)) {
-			end_batch = rte_rdtsc();
+			end_batch = rte_get_timer_cycles();
 			delta = (double) (end_batch - start_batch);
 			rules_batch_idx = ((i + 1) / rules_batch) - 1;
-			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_tsc_hz();
+			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
@@ -1185,7 +1185,7 @@ insert_flows(int port_id, uint8_t core_id)
 		flows_list[flow_index++] = flow;
 	}
 
-	start_batch = rte_rdtsc();
+	start_batch = rte_get_timer_cycles();
 	for (counter = start_counter; counter < end_counter; counter++) {
 		flow = generate_flow(port_id, flow_group,
 			flow_attrs, flow_items, flow_actions,
@@ -1211,12 +1211,12 @@ insert_flows(int port_id, uint8_t core_id)
 		 * for this batch.
 		 */
 		if (!((counter + 1) % rules_batch)) {
-			end_batch = rte_rdtsc();
+			end_batch = rte_get_timer_cycles();
 			delta = (double) (end_batch - start_batch);
 			rules_batch_idx = ((counter + 1) / rules_batch) - 1;
-			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_tsc_hz();
+			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_timer_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
+			start_batch = rte_get_timer_cycles();
 		}
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v4 2/7] app/flow-perf: add new option to use unique data on the fly
  2021-03-14  9:54           ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
@ 2021-03-14  9:54             ` Wisam Jaddo
  2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 3/7] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
                               ` (5 subsequent siblings)
  7 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-14  9:54 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

Unique data is currently enabled by building with the config.h
variable FIXED_VALUES set to 0, so it can only be selected at
compile time and the user is limited to a single mode per build.

Starting with this commit the user can enable this feature at
run time with the new option:
--unique-data

Example of unique data usage:
insert many rules with different encap data for flows that
contain an encap action.

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-flow-perf/actions_gen.c | 77 +++++++++++++++++---------------
 app/test-flow-perf/actions_gen.h |  3 +-
 app/test-flow-perf/config.h      |  8 +---
 app/test-flow-perf/flow_gen.c    |  4 +-
 app/test-flow-perf/flow_gen.h    |  1 +
 app/test-flow-perf/main.c        | 13 ++++--
 doc/guides/tools/flow-perf.rst   |  5 +++
 7 files changed, 62 insertions(+), 49 deletions(-)

diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c
index 1f5c64fde9..82cddfc676 100644
--- a/app/test-flow-perf/actions_gen.c
+++ b/app/test-flow-perf/actions_gen.c
@@ -30,6 +30,7 @@ struct additional_para {
 	uint64_t encap_data;
 	uint64_t decap_data;
 	uint8_t core_idx;
+	bool unique_data;
 };
 
 /* Storage for struct rte_flow_action_raw_encap including external data. */
@@ -202,14 +203,14 @@ add_count(struct rte_flow_action *actions,
 static void
 add_set_src_mac(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_mac set_macs[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t mac = para.counter;
 	uint16_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		mac = 1;
 
 	/* Mac address to be set is random each time */
@@ -225,14 +226,14 @@ add_set_src_mac(struct rte_flow_action *actions,
 static void
 add_set_dst_mac(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_mac set_macs[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t mac = para.counter;
 	uint16_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		mac = 1;
 
 	/* Mac address to be set is random each time */
@@ -248,13 +249,13 @@ add_set_dst_mac(struct rte_flow_action *actions,
 static void
 add_set_src_ipv4(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv4 set_ipv4[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ip = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip = 1;
 
 	/* IPv4 value to be set is random each time */
@@ -267,13 +268,13 @@ add_set_src_ipv4(struct rte_flow_action *actions,
 static void
 add_set_dst_ipv4(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv4 set_ipv4[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ip = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip = 1;
 
 	/* IPv4 value to be set is random each time */
@@ -286,14 +287,14 @@ add_set_dst_ipv4(struct rte_flow_action *actions,
 static void
 add_set_src_ipv6(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv6 set_ipv6[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ipv6 = para.counter;
 	uint8_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ipv6 = 1;
 
 	/* IPv6 value to set is random each time */
@@ -309,14 +310,14 @@ add_set_src_ipv6(struct rte_flow_action *actions,
 static void
 add_set_dst_ipv6(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ipv6 set_ipv6[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ipv6 = para.counter;
 	uint8_t i;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ipv6 = 1;
 
 	/* IPv6 value to set is random each time */
@@ -332,13 +333,13 @@ add_set_dst_ipv6(struct rte_flow_action *actions,
 static void
 add_set_src_tp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_tp set_tp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t tp = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		tp = 100;
 
 	/* TP src port is random each time */
@@ -353,13 +354,13 @@ add_set_src_tp(struct rte_flow_action *actions,
 static void
 add_set_dst_tp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_tp set_tp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t tp = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		tp = 100;
 
 	/* TP src port is random each time */
@@ -375,13 +376,13 @@ add_set_dst_tp(struct rte_flow_action *actions,
 static void
 add_inc_tcp_ack(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ack_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ack_value = 1;
 
 	value[para.core_idx] = RTE_BE32(ack_value);
@@ -393,13 +394,13 @@ add_inc_tcp_ack(struct rte_flow_action *actions,
 static void
 add_dec_tcp_ack(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ack_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ack_value = 1;
 
 	value[para.core_idx] = RTE_BE32(ack_value);
@@ -411,13 +412,13 @@ add_dec_tcp_ack(struct rte_flow_action *actions,
 static void
 add_inc_tcp_seq(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t seq_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		seq_value = 1;
 
 	value[para.core_idx] = RTE_BE32(seq_value);
@@ -429,13 +430,13 @@ add_inc_tcp_seq(struct rte_flow_action *actions,
 static void
 add_dec_tcp_seq(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static rte_be32_t value[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t seq_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		seq_value = 1;
 
 	value[para.core_idx] = RTE_BE32(seq_value);
@@ -447,13 +448,13 @@ add_dec_tcp_seq(struct rte_flow_action *actions,
 static void
 add_set_ttl(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_ttl set_ttl[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t ttl_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ttl_value = 1;
 
 	/* Set ttl to random value each time */
@@ -476,13 +477,13 @@ add_dec_ttl(struct rte_flow_action *actions,
 static void
 add_set_ipv4_dscp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_dscp set_dscp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t dscp_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		dscp_value = 1;
 
 	/* Set dscp to random value each time */
@@ -497,13 +498,13 @@ add_set_ipv4_dscp(struct rte_flow_action *actions,
 static void
 add_set_ipv6_dscp(struct rte_flow_action *actions,
 	uint8_t actions_counter,
-	__rte_unused struct additional_para para)
+	struct additional_para para)
 {
 	static struct rte_flow_action_set_dscp set_dscp[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint32_t dscp_value = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		dscp_value = 1;
 
 	/* Set dscp to random value each time */
@@ -577,7 +578,7 @@ add_ipv4_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip_dst = 1;
 
 	memset(&ipv4_hdr, 0, sizeof(struct rte_ipv4_hdr));
@@ -643,7 +644,7 @@ add_vxlan_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&vxlan_hdr, 0, sizeof(struct rte_vxlan_hdr));
@@ -666,7 +667,7 @@ add_vxlan_gpe_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&vxlan_gpe_hdr, 0, sizeof(struct rte_vxlan_gpe_hdr));
@@ -707,7 +708,7 @@ add_geneve_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		vni_value = 1;
 
 	memset(&geneve_hdr, 0, sizeof(struct rte_geneve_hdr));
@@ -730,7 +731,7 @@ add_gtp_header(uint8_t **header, uint64_t data,
 		return;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		teid_value = 1;
 
 	memset(&gtp_hdr, 0, sizeof(struct rte_flow_item_gtp));
@@ -849,7 +850,7 @@ add_vxlan_encap(struct rte_flow_action *actions,
 	uint32_t ip_dst = para.counter;
 
 	/* Fixed value */
-	if (FIXED_VALUES)
+	if (!para.unique_data)
 		ip_dst = 1;
 
 	items[0].spec = &item_eth;
@@ -907,7 +908,8 @@ add_meter(struct rte_flow_action *actions,
 void
 fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 	uint32_t counter, uint16_t next_table, uint16_t hairpinq,
-	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx)
+	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx,
+	bool unique_data)
 {
 	struct additional_para additional_para_data;
 	uint8_t actions_counter = 0;
@@ -930,6 +932,7 @@ fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 		.encap_data = encap_data,
 		.decap_data = decap_data,
 		.core_idx = core_idx,
+		.unique_data = unique_data,
 	};
 
 	if (hairpinq != 0) {
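
The same pattern repeats across all the header-modify actions above;
a condensed, self-contained sketch of it (the helper name and the
plain-parameter signature are illustrative, not the patch's code):

#include <stdbool.h>
#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

/* Per-lcore storage plus a runtime fixed/unique switch. */
static void
set_src_ipv4_sketch(struct rte_flow_action *action, uint32_t counter,
	bool unique_data, uint8_t core_idx)
{
	static struct rte_flow_action_set_ipv4 set_ipv4[RTE_MAX_LCORE] __rte_cache_aligned;
	uint32_t ip = counter;

	/* Without --unique-data, every flow reuses the same fixed value. */
	if (!unique_data)
		ip = 1;

	set_ipv4[core_idx].ipv4_addr = RTE_BE32(ip);
	action->type = RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC;
	action->conf = &set_ipv4[core_idx];
}
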
diff --git a/app/test-flow-perf/actions_gen.h b/app/test-flow-perf/actions_gen.h
index 77353cfe09..6f2f833496 100644
--- a/app/test-flow-perf/actions_gen.h
+++ b/app/test-flow-perf/actions_gen.h
@@ -19,6 +19,7 @@
 
 void fill_actions(struct rte_flow_action *actions, uint64_t *flow_actions,
 	uint32_t counter, uint16_t next_table, uint16_t hairpinq,
-	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx);
+	uint64_t encap_data, uint64_t decap_data, uint8_t core_idx,
+	bool unique_data);
 
 #endif /* FLOW_PERF_ACTION_GEN */
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index 3d4696d61a..a14d4e05e1 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
 #define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP | ETH_RSS_TCP)
+#define GET_RSS_HF() (ETH_RSS_IP)
 
 /* Configuration */
 #define RXQ_NUM 4
@@ -19,12 +19,6 @@
 #define METER_CIR 1250000
 #define DEFAULT_METER_PROF_ID 100
 
-/* This is used for encap/decap & header modify actions.
- * When it's 1: it means all actions have fixed values.
- * When it's 0: it means all actions will have different values.
- */
-#define FIXED_VALUES 1
-
 /* Items/Actions parameters */
 #define JUMP_ACTION_TABLE 2
 #define VLAN_VALUE 1
diff --git a/app/test-flow-perf/flow_gen.c b/app/test-flow-perf/flow_gen.c
index df4af16de8..8f87fac5f6 100644
--- a/app/test-flow-perf/flow_gen.c
+++ b/app/test-flow-perf/flow_gen.c
@@ -46,6 +46,7 @@ generate_flow(uint16_t port_id,
 	uint64_t encap_data,
 	uint64_t decap_data,
 	uint8_t core_idx,
+	bool unique_data,
 	struct rte_flow_error *error)
 {
 	struct rte_flow_attr attr;
@@ -61,7 +62,8 @@ generate_flow(uint16_t port_id,
 
 	fill_actions(actions, flow_actions,
 		outer_ip_src, next_table, hairpinq,
-		encap_data, decap_data, core_idx);
+		encap_data, decap_data, core_idx,
+		unique_data);
 
 	fill_items(items, flow_items, outer_ip_src, core_idx);
 
diff --git a/app/test-flow-perf/flow_gen.h b/app/test-flow-perf/flow_gen.h
index f1d0999af1..dc887fceae 100644
--- a/app/test-flow-perf/flow_gen.h
+++ b/app/test-flow-perf/flow_gen.h
@@ -35,6 +35,7 @@ generate_flow(uint16_t port_id,
 	uint64_t encap_data,
 	uint64_t decap_data,
 	uint8_t core_idx,
+	bool unique_data,
 	struct rte_flow_error *error);
 
 #endif /* FLOW_PERF_FLOW_GEN */
diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 8b5a11c15e..4054178273 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -61,6 +61,7 @@ static bool dump_iterations;
 static bool delete_flag;
 static bool dump_socket_mem_flag;
 static bool enable_fwd;
+static bool unique_data;
 
 static struct rte_mempool *mbuf_mp;
 static uint32_t nb_lcores;
@@ -131,6 +132,8 @@ usage(char *progname)
 	printf("  --enable-fwd: To enable packets forwarding"
 		" after insertion\n");
 	printf("  --portmask=N: hexadecimal bitmask of ports used\n");
+	printf("  --unique-data: flag to set using unique data for all"
+		" actions that support data, such as header modify and encap actions\n");
 
 	printf("To set flow attributes:\n");
 	printf("  --ingress: set ingress attribute in flows\n");
@@ -567,6 +570,7 @@ args_parse(int argc, char **argv)
 		{ "deletion-rate",              0, 0, 0 },
 		{ "dump-socket-mem",            0, 0, 0 },
 		{ "enable-fwd",                 0, 0, 0 },
+		{ "unique-data",                0, 0, 0 },
 		{ "portmask",                   1, 0, 0 },
 		{ "cores",                      1, 0, 0 },
 		/* Attributes */
@@ -765,6 +769,9 @@ args_parse(int argc, char **argv)
 			if (strcmp(lgopts[opt_idx].name,
 					"dump-iterations") == 0)
 				dump_iterations = true;
+			if (strcmp(lgopts[opt_idx].name,
+					"unique-data") == 0)
+				unique_data = true;
 			if (strcmp(lgopts[opt_idx].name,
 					"deletion-rate") == 0)
 				delete_flag = true;
@@ -1176,7 +1183,7 @@ insert_flows(int port_id, uint8_t core_id)
 		 */
 		flow = generate_flow(port_id, 0, flow_attrs,
 			global_items, global_actions,
-			flow_group, 0, 0, 0, 0, core_id, &error);
+			flow_group, 0, 0, 0, 0, core_id, unique_data, &error);
 
 		if (flow == NULL) {
 			print_flow_error(error);
@@ -1192,7 +1199,7 @@ insert_flows(int port_id, uint8_t core_id)
 			JUMP_ACTION_TABLE, counter,
 			hairpin_queues_num,
 			encap_data, decap_data,
-			core_id, &error);
+			core_id, unique_data, &error);
 
 		if (force_quit)
 			counter = end_counter;
@@ -1863,6 +1870,7 @@ main(int argc, char **argv)
 	delete_flag = false;
 	dump_socket_mem_flag = false;
 	flow_group = DEFAULT_GROUP;
+	unique_data = false;
 
 	signal(SIGINT, signal_handler);
 	signal(SIGTERM, signal_handler);
@@ -1878,7 +1886,6 @@ main(int argc, char **argv)
 	if (nb_lcores <= 1)
 		rte_exit(EXIT_FAILURE, "This app needs at least two cores\n");
 
-
 	printf(":: Flows Count per port: %d\n\n", rules_count);
 
 	if (has_meter())
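
For readers unfamiliar with the parser structure, a minimal standalone
sketch of the flag-style long-option handling used above (it assumes
only the single option; the app's real table carries many more entries):

#include <getopt.h>
#include <stdbool.h>
#include <string.h>

static bool unique_data;

/* Options with no argument are matched by name when getopt_long()
 * returns 0 (flag == NULL, val == 0 in the table). */
static void
parse_unique_data_flag(int argc, char **argv)
{
	static const struct option lgopts[] = {
		{ "unique-data", 0, 0, 0 },
		{ 0, 0, 0, 0 },
	};
	int opt, opt_idx;

	while ((opt = getopt_long(argc, argv, "", lgopts, &opt_idx)) != -1) {
		if (opt == 0 &&
		    strcmp(lgopts[opt_idx].name, "unique-data") == 0)
			unique_data = true;
	}
}
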
diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst
index 017e200222..280bf7e0e0 100644
--- a/doc/guides/tools/flow-perf.rst
+++ b/doc/guides/tools/flow-perf.rst
@@ -100,6 +100,11 @@ The command line options are:
 	Set the number of needed cores to insert/delete rte_flow rules.
 	Default cores count is 1.
 
+*       ``--unique-data``
+        Flag to use unique data for all actions that support data,
+        such as header modify and encap actions. By default, fixed data
+        is used for any action that supports data, for all flows.
+
 Attributes:
 
 *	``--ingress``
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v4 3/7] app/flow-perf: fix naming of CPU used structured data
  2021-03-14  9:54           ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
  2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
  2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 2/7] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
@ 2021-03-14  9:54             ` Wisam Jaddo
  2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 4/7] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
                               ` (4 subsequent siblings)
  7 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-14  9:54 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: dongzhou

create_flow and create_meter are not accurate names, since these
structures record both creation and deletion times; renaming them
to flows_record and meters_record better reflects what they hold.

Fixes: d8099d7ecbd0 ("app/flow-perf: split dump functions")
Cc: dongzhou@nvidia.com

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-flow-perf/main.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 4054178273..01607881df 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -105,8 +105,8 @@ struct used_cpu_time {
 struct multi_cores_pool {
 	uint32_t cores_count;
 	uint32_t rules_count;
-	struct used_cpu_time create_meter;
-	struct used_cpu_time create_flow;
+	struct used_cpu_time meters_record;
+	struct used_cpu_time flows_record;
 	int64_t last_alloc[RTE_MAX_LCORE];
 	int64_t current_alloc[RTE_MAX_LCORE];
 } __rte_cache_aligned;
@@ -1013,10 +1013,10 @@ meters_handler(int port_id, uint8_t core_id, uint8_t ops)
 		cpu_time_used, insertion_rate);
 
 	if (ops == METER_CREATE)
-		mc_pool.create_meter.insertion[port_id][core_id]
+		mc_pool.meters_record.insertion[port_id][core_id]
 			= cpu_time_used;
 	else
-		mc_pool.create_meter.deletion[port_id][core_id]
+		mc_pool.meters_record.deletion[port_id][core_id]
 			= cpu_time_used;
 }
 
@@ -1134,7 +1134,7 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 	printf(":: Port %d :: Core %d :: The time for deleting %d rules is %f seconds\n",
 		port_id, core_id, rules_count_per_core, cpu_time_used);
 
-	mc_pool.create_flow.deletion[port_id][core_id] = cpu_time_used;
+	mc_pool.flows_record.deletion[port_id][core_id] = cpu_time_used;
 }
 
 static struct rte_flow **
@@ -1241,7 +1241,7 @@ insert_flows(int port_id, uint8_t core_id)
 	printf(":: Port %d :: Core %d :: The time for creating %d in rules %f seconds\n",
 		port_id, core_id, rules_count_per_core, cpu_time_used);
 
-	mc_pool.create_flow.insertion[port_id][core_id] = cpu_time_used;
+	mc_pool.flows_record.insertion[port_id][core_id] = cpu_time_used;
 	return flows_list;
 }
 
@@ -1439,9 +1439,9 @@ run_rte_flow_handler_cores(void *data __rte_unused)
 	RTE_ETH_FOREACH_DEV(port) {
 		if (has_meter())
 			dump_used_cpu_time("Meters:",
-				port, &mc_pool.create_meter);
+				port, &mc_pool.meters_record);
 		dump_used_cpu_time("Flows:",
-			port, &mc_pool.create_flow);
+			port, &mc_pool.flows_record);
 		dump_used_mem(port);
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v4 4/7] app/flow-perf: fix report total stats for masked ports
  2021-03-14  9:54           ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                               ` (2 preceding siblings ...)
  2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 3/7] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
@ 2021-03-14  9:54             ` Wisam Jaddo
  2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 5/7] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
                               ` (3 subsequent siblings)
  7 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-14  9:54 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: dongzhou, stable

Take into account that the user may pass a portmask for any run,
so the app should always check whether a port is included before
collecting and reporting its stats.

Fixes: 070316d01d3e ("app/flow-perf: add multi-core rule insertion and deletion")
Fixes: d8099d7ecbd0 ("app/flow-perf: split dump functions")
Cc: dongzhou@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-flow-perf/main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 01607881df..e32714131c 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -1437,6 +1437,9 @@ run_rte_flow_handler_cores(void *data __rte_unused)
 	rte_eal_mp_wait_lcore();
 
 	RTE_ETH_FOREACH_DEV(port) {
+		/* If port outside portmask */
+		if (!((ports_mask >> port) & 0x1))
+			continue;
 		if (has_meter())
 			dump_used_cpu_time("Meters:",
 				port, &mc_pool.meters_record);
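
The added test follows the usual DPDK portmask idiom; in isolation
(a hedged sketch, the helper name and the exact types are assumptions):

#include <stdbool.h>
#include <stdint.h>

/* True when 'port' is selected by the --portmask bitmask. */
static inline bool
port_in_mask(uint64_t ports_mask, uint16_t port)
{
	return (ports_mask >> port) & 0x1;
}
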
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v4 5/7] app/flow-perf: fix the incremental IPv6 src set
  2021-03-14  9:54           ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                               ` (3 preceding siblings ...)
  2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 4/7] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
@ 2021-03-14  9:54             ` Wisam Jaddo
  2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 6/7] app/flow-perf: add first flow latency support Wisam Jaddo
                               ` (2 subsequent siblings)
  7 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-14  9:54 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: wisamm, stable

Currently the memset() does not set a src IP that represents the
incremental value of the counter.

This commit fixes that: each flow now gets a correct IPv6 src
address that is incremented from the previous flow and matches
the decimal counter value.

Fixes: bf3688f1e816 ("app/flow-perf: add insertion rate calculation")
Cc: wisamm@mellanox.com
Cc: stable@dpdk.org

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-flow-perf/items_gen.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c
index ccebc08b39..a73de9031f 100644
--- a/app/test-flow-perf/items_gen.c
+++ b/app/test-flow-perf/items_gen.c
@@ -72,14 +72,15 @@ add_ipv6(struct rte_flow_item *items,
 	static struct rte_flow_item_ipv6 ipv6_specs[RTE_MAX_LCORE] __rte_cache_aligned;
 	static struct rte_flow_item_ipv6 ipv6_masks[RTE_MAX_LCORE] __rte_cache_aligned;
 	uint8_t ti = para.core_idx;
+	uint8_t i;
 
 	/** Set ipv6 src **/
-	memset(&ipv6_specs[ti].hdr.src_addr, para.src_ip,
-		sizeof(ipv6_specs->hdr.src_addr) / 2);
-
-	/** Full mask **/
-	memset(&ipv6_masks[ti].hdr.src_addr, 0xff,
-		sizeof(ipv6_specs->hdr.src_addr));
+	for (i = 0; i < 16; i++) {
+		/* Currently src_ip is limited to 32 bit */
+		if (i < 4)
+			ipv6_specs[ti].hdr.src_addr[15 - i] = para.src_ip >> (i * 8);
+		ipv6_masks[ti].hdr.src_addr[15 - i] = 0xff;
+	}
 
 	items[items_counter].type = RTE_FLOW_ITEM_TYPE_IPV6;
 	items[items_counter].spec = &ipv6_specs[ti];
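
To make the byte layout explicit, a standalone sketch of the same loop
(the helper and its signature are illustrative, not part of the patch):

#include <stdint.h>

/* Write a 32-bit counter into the low-order bytes of a 16-byte IPv6
 * address (network byte order) and build a full /128 mask. The upper
 * 12 address bytes are assumed zero-initialized, as they are for the
 * static arrays in the patch. */
static void
set_ipv6_src_sketch(uint8_t addr[16], uint8_t mask[16], uint32_t src_ip)
{
	for (int i = 0; i < 16; i++) {
		if (i < 4)
			addr[15 - i] = src_ip >> (i * 8);
		mask[15 - i] = 0xff;
	}
}

/* e.g. src_ip = 0x00000102 sets addr[14] = 0x01 and addr[15] = 0x02,
 * i.e. the address ::102. */
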
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v4 6/7] app/flow-perf: add first flow latency support
  2021-03-14  9:54           ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                               ` (4 preceding siblings ...)
  2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 5/7] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
@ 2021-03-14  9:54             ` Wisam Jaddo
  2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 7/7] app/flow-perf: fix setting decap data for decap actions Wisam Jaddo
  2021-04-12 14:33             ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Thomas Monjalon
  7 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-14  9:54 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev

Starting from this commit the app always reports
the latency of the first flow insertion.

This is useful when debugging, to measure the first
flow insertion before any caching effects kick in.

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-flow-perf/main.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index e32714131c..d33b00a89e 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -1143,6 +1143,7 @@ insert_flows(int port_id, uint8_t core_id)
 	struct rte_flow **flows_list;
 	struct rte_flow_error error;
 	clock_t start_batch, end_batch;
+	double first_flow_latency;
 	double cpu_time_used;
 	double insertion_rate;
 	double cpu_time_per_batch[MAX_BATCHES_COUNT] = { 0 };
@@ -1201,6 +1202,16 @@ insert_flows(int port_id, uint8_t core_id)
 			encap_data, decap_data,
 			core_id, unique_data, &error);
 
+		if (!counter) {
+			first_flow_latency = (double) (rte_get_timer_cycles() - start_batch);
+			first_flow_latency /= rte_get_timer_hz();
+			/* In millisecond */
+			first_flow_latency *= 1000;
+			printf(":: First Flow Latency :: Port %d :: First flow "
+				"installed in %f milliseconds\n",
+				port_id, first_flow_latency);
+		}
+
 		if (force_quit)
 			counter = end_counter;
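
The same cycles-to-milliseconds conversion as a hedged one-liner (the
helper name is made up; in the patch, start_batch holds the cycle count
taken just before the first insertion):

#include <stdint.h>
#include <rte_cycles.h>

/* Elapsed cycles -> milliseconds, using the generic timer frequency. */
static inline double
cycles_to_ms(uint64_t start_cycles, uint64_t end_cycles)
{
	return (double)(end_cycles - start_cycles) / rte_get_timer_hz() * 1000.0;
}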
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [dpdk-dev] [PATCH v4 7/7] app/flow-perf: fix setting decap data for decap actions
  2021-03-14  9:54           ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                               ` (5 preceding siblings ...)
  2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 6/7] app/flow-perf: add first flow latency support Wisam Jaddo
@ 2021-03-14  9:54             ` Wisam Jaddo
  2021-04-12 14:33             ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Thomas Monjalon
  7 siblings, 0 replies; 46+ messages in thread
From: Wisam Jaddo @ 2021-03-14  9:54 UTC (permalink / raw)
  To: arybchenko, thomas, akozyrev, rasland, dev; +Cc: stable

When using decap actions, the data to decap was stored in
encap_data instead of decap_data; as a result, we end up with
bad encap and decap data in many cases.

Fixes: 0c8f1f4ab90e ("app/flow-perf: support raw encap/decap actions")
Cc: stable@dpdk.org

Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-flow-perf/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index d33b00a89e..97a4d4ac63 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -730,7 +730,7 @@ args_parse(int argc, char **argv)
 					for (i = 0; i < RTE_DIM(flow_options); i++) {
 						if (strcmp(flow_options[i].str, token) == 0) {
 							printf("%s,", token);
-							encap_data |= flow_options[i].mask;
+							decap_data |= flow_options[i].mask;
 							break;
 						}
 						/* Reached last item with no match */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf
  2021-03-14  9:54           ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
                               ` (6 preceding siblings ...)
  2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 7/7] app/flow-perf: fix setting decap data for decap actions Wisam Jaddo
@ 2021-04-12 14:33             ` Thomas Monjalon
  7 siblings, 0 replies; 46+ messages in thread
From: Thomas Monjalon @ 2021-04-12 14:33 UTC (permalink / raw)
  To: Wisam Jaddo; +Cc: arybchenko, akozyrev, rasland, dev

14/03/2021 10:54, Wisam Jaddo:
> v4:
> * Fix warrning of 100 char long line.
> * Add more discription in cover letter.
> 
> Wisam Jaddo (7):
>   app/flow-perf: start using more generic wrapper for cycles
>   app/flow-perf: add new option to use unique data on the fly
>   app/flow-perf: fix naming of CPU used structured data
>   app/flow-perf: fix report total stats for masked ports
>   app/flow-perf: fix the incremental IPv6 src set
>   app/flow-perf: add first flow latency support
>   app/flow-perf: fix setting decap data for decap actions

v4 applied, thanks





^ permalink raw reply	[flat|nested] 46+ messages in thread

end of thread, other threads:[~2021-04-12 14:33 UTC | newest]

Thread overview: 46+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-07  9:11 [dpdk-dev] [PATCH 0/5] New flow-perf fixes Wisam Jaddo
2021-03-07  9:11 ` [dpdk-dev] [PATCH 1/5] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
2021-03-10 13:45   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 2/7] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 3/7] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 4/7] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 5/7] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
2021-03-10 13:45     ` [dpdk-dev] [PATCH v2 6/7] app/flow-perf: add first flow latency support Wisam Jaddo
2021-03-10 13:48   ` [dpdk-dev] [PATCH v2 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
2021-03-10 13:53       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 2/7] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 3/7] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 4/7] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 5/7] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
2021-03-10 13:53         ` [dpdk-dev] [PATCH v3 6/7] app/flow-perf: add first flow latency support Wisam Jaddo
2021-03-10 13:55       ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
2021-03-14  9:54           ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Wisam Jaddo
2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 1/7] app/flow-perf: start using more generic wrapper for cycles Wisam Jaddo
2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 2/7] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 3/7] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 4/7] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 5/7] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 6/7] app/flow-perf: add first flow latency support Wisam Jaddo
2021-03-14  9:54             ` [dpdk-dev] [PATCH v4 7/7] app/flow-perf: fix setting decap data for decap actions Wisam Jaddo
2021-04-12 14:33             ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Thomas Monjalon
2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 2/7] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 3/7] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 4/7] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 5/7] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 6/7] app/flow-perf: add first flow latency support Wisam Jaddo
2021-03-10 13:55         ` [dpdk-dev] [PATCH v3 7/7] app/flow-perf: fix setting decap data for decap actions Wisam Jaddo
2021-03-10 21:54         ` [dpdk-dev] [PATCH v3 0/7] Enhancements and fixes for flow-perf Alexander Kozyrev
2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 2/7] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 3/7] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 4/7] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 5/7] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo
2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 6/7] app/flow-perf: add first flow latency support Wisam Jaddo
2021-03-10 13:48     ` [dpdk-dev] [PATCH v2 7/7] app/flow-perf: fix setting decap data for decap actions Wisam Jaddo
2021-03-07  9:11 ` [dpdk-dev] [PATCH 2/5] app/flow-perf: add new option to use unique data on the fly Wisam Jaddo
2021-03-07  9:12 ` [dpdk-dev] [PATCH 3/5] app/flow-perf: fix naming of CPU used structured data Wisam Jaddo
2021-03-07  9:12 ` [dpdk-dev] [PATCH 4/5] app/flow-perf: fix report total stats for masked ports Wisam Jaddo
2021-03-07  9:12 ` [dpdk-dev] [PATCH 5/5] app/flow-perf: fix the incremental IPv6 src set Wisam Jaddo

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).