DPDK patches and discussions
From: Joshua Washington <joshwash@google.com>
To: Aman Singh <aman.deep.singh@intel.com>,
	Yuying Zhang <yuying.zhang@intel.com>
Cc: dev@dpdk.org, Joshua Washington <joshwash@google.com>,
	Rushil Gupta <rushilg@google.com>
Subject: [PATCH v5] app/testpmd: txonly multiflow port change support
Date: Fri, 21 Apr 2023 16:20:22 -0700
Message-ID: <20230421232022.342081-1-joshwash@google.com>
In-Reply-To: <20230412181619.496342-1-joshwash@google.com>

Google Cloud routes traffic using IP addresses without regard to MAC
addresses, so varying the source IP address in txonly-multi-flow mode can
have negative performance implications for net/gve when using testpmd.
This patch updates txonly multi-flow mode to vary the UDP source port
instead of the source IP address.

The change can be tested with the following command:
dpdk-testpmd -- --forward-mode=txonly --txonly-multi-flow \
    --tx-ip=<SRC>,<DST>
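
For reference, the following is a minimal standalone sketch of the
source-port derivation that the diff below applies. The helper name,
the fixed lcore ID, and the print loop are illustrative only and are
not part of the patch:

#include <stdint.h>
#include <stdio.h>

/* Mirrors the patch's formula: the high byte of the UDP source port
 * cycles through 0xC0..0xFE (keeping the port in the 49152-65535
 * dynamic range) and the low byte carries the lcore ID. "src_var"
 * stands in for the per-lcore counter kept by testpmd. */
static uint16_t
multi_flow_src_port(uint8_t src_var, unsigned int lcore_id)
{
	return ((((src_var % (0xFF - 0xC0)) + 0xC0) & 0xFF) << 8) + lcore_id;
}

int
main(void)
{
	/* First few ports generated for lcore 1: 0xC001, 0xC101, 0xC201, 0xC301 */
	for (uint8_t i = 0; i < 4; i++)
		printf("0x%04X\n", (unsigned int)multi_flow_src_port(i, 1));
	return 0;
}

Varying only the source port keeps every generated flow inside the IANA
dynamic port range while still giving RSS on the receiver distinct values
to hash, which is the point of the multi-flow mode.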

Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Rushil Gupta <rushilg@google.com>
---
 app/test-pmd/txonly.c | 39 +++++++++++++++++++++++----------------
 1 file changed, 23 insertions(+), 16 deletions(-)

diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index b3d6873104..f79e0e5d0b 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -56,7 +56,7 @@ uint32_t tx_ip_dst_addr = (198U << 24) | (18 << 16) | (0 << 8) | 2;
 #define IP_DEFTTL  64   /* from RFC 1340. */
 
 static struct rte_ipv4_hdr pkt_ip_hdr; /**< IP header of transmitted packets. */
-RTE_DEFINE_PER_LCORE(uint8_t, _ip_var); /**< IP address variation */
+RTE_DEFINE_PER_LCORE(uint8_t, _src_var); /**< Source port variation */
 static struct rte_udp_hdr pkt_udp_hdr; /**< UDP header of tx packets. */
 
 static uint64_t timestamp_mask; /**< Timestamp dynamic flag mask */
@@ -230,28 +230,35 @@ pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
 	copy_buf_to_pkt(eth_hdr, sizeof(*eth_hdr), pkt, 0);
 	copy_buf_to_pkt(&pkt_ip_hdr, sizeof(pkt_ip_hdr), pkt,
 			sizeof(struct rte_ether_hdr));
+	copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
+			sizeof(struct rte_ether_hdr) +
+			sizeof(struct rte_ipv4_hdr));
 	if (txonly_multi_flow) {
-		uint8_t  ip_var = RTE_PER_LCORE(_ip_var);
-		struct rte_ipv4_hdr *ip_hdr;
-		uint32_t addr;
+		uint16_t src_var = RTE_PER_LCORE(_src_var);
+		struct rte_udp_hdr *udp_hdr;
+		uint16_t port;
 
-		ip_hdr = rte_pktmbuf_mtod_offset(pkt,
-				struct rte_ipv4_hdr *,
-				sizeof(struct rte_ether_hdr));
+		udp_hdr = rte_pktmbuf_mtod_offset(pkt,
+				struct rte_udp_hdr *,
+				sizeof(struct rte_ether_hdr) +
+				sizeof(struct rte_ipv4_hdr));
 		/*
-		 * Generate multiple flows by varying IP src addr. This
-		 * enables packets are well distributed by RSS in
+		 * Generate multiple flows by varying UDP source port.
+		 * This enables packets to be well distributed by RSS on the
 		 * receiver side if any and txonly mode can be a decent
 		 * packet generator for developer's quick performance
 		 * regression test.
+		 *
+		 * Only ports in the range 49152 (0xC000) to 65535 (0xFFFF)
+		 * will be used, with the least significant byte representing
+		 * the lcore ID. As such, the most significant byte will cycle
+		 * from 0xC0 through 0xFE.
 		 */
-		addr = (tx_ip_dst_addr | (ip_var++ << 8)) + rte_lcore_id();
-		ip_hdr->src_addr = rte_cpu_to_be_32(addr);
-		RTE_PER_LCORE(_ip_var) = ip_var;
+		port = ((((src_var++) % (0xFF - 0xC0) + 0xC0) & 0xFF) << 8)
+				+ rte_lcore_id();
+		udp_hdr->src_port = rte_cpu_to_be_16(port);
+		RTE_PER_LCORE(_src_var) = src_var;
 	}
-	copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
-			sizeof(struct rte_ether_hdr) +
-			sizeof(struct rte_ipv4_hdr));
 
 	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) || txonly_multi_flow)
 		update_pkt_header(pkt, pkt_len);
@@ -393,7 +400,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	nb_tx = common_fwd_stream_transmit(fs, pkts_burst, nb_pkt);
 
 	if (txonly_multi_flow)
-		RTE_PER_LCORE(_ip_var) -= nb_pkt - nb_tx;
+		RTE_PER_LCORE(_src_var) -= nb_pkt - nb_tx;
 
 	if (unlikely(nb_tx < nb_pkt)) {
 		if (verbose_level > 0 && fs->fwd_dropped == 0)
-- 
2.40.0.634.g4ca3ef3211-goog


Thread overview: 16+ messages
2023-04-11 20:17 [PATCH v2] " Joshua Washington
2023-04-11 20:24 ` [PATCH v3] " Joshua Washington
2023-04-12 18:16   ` [PATCH v4] " Joshua Washington
2023-04-19 12:21     ` Singh, Aman Deep
2023-04-19 14:38       ` Stephen Hemminger
2023-04-21 23:20     ` Joshua Washington [this message]
2023-04-24 17:55       ` [PATCH v5] " Joshua Washington
2023-04-25  6:56         ` David Marchand
2023-04-26  7:24       ` Singh, Aman Deep
2023-05-15 18:11         ` Joshua Washington
2023-05-15 22:26       ` Ferruh Yigit
2023-05-16 17:32         ` Joshua Washington
2023-05-17 10:10           ` Ferruh Yigit
2023-05-17 10:34       ` Ferruh Yigit
2023-05-19  0:22       ` [PATCH v6] " Joshua Washington
2023-05-19 11:23         ` Ferruh Yigit
