From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 864DEA0542;
	Fri, 28 Oct 2022 10:27:45 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 147A6400D5;
	Fri, 28 Oct 2022 10:27:45 +0200 (CEST)
Received: from mail-pl1-f173.google.com (mail-pl1-f173.google.com
 [209.85.214.173])
 by mails.dpdk.org (Postfix) with ESMTP id B42DB40041
 for <dev@dpdk.org>; Fri, 28 Oct 2022 10:27:43 +0200 (CEST)
Received: by mail-pl1-f173.google.com with SMTP id g24so4261492plq.3
 for <dev@dpdk.org>; Fri, 28 Oct 2022 01:27:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112;
 h=content-transfer-encoding:mime-version:references:in-reply-to
 :message-id:date:subject:cc:to:from:from:to:cc:subject:date
 :message-id:reply-to;
 bh=05DFg3vE15yNJSOlKCgnlrYPtKhMQQNKh7JYpmY5NHo=;
 b=CSTxOCpt6orRw9udd2rxEyM5kkTj1BwTckWU+0Svk4JGIUbA1TrA5tyaZkZXCogpDa
 ta2gH3LIfbU0hTTRrMt4uba6JrkTKmxCdLaBaU3pFxHUMs1T6TQcLHYWSiFVfZo3rwMd
 1OPv9yX5pD+lteJk4JRC2l0ahiAQyxICNAUvXGkdwxfZttkv32SEsKbiBA7l9NvMqgRv
 JW+Hla/ie7460I2r95tgGFuDb7HX8ZF6nTC21JF7MYW4BdD+LKNuDyCMeqhT1Irae8if
 w1cLAMY56wlfDCiev/0wTWvofqCopntgKZzH5KaseON8W26ac+8UMaDpQNAhS9eQ5R9m
 sYiQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20210112;
 h=content-transfer-encoding:mime-version:references:in-reply-to
 :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
 :subject:date:message-id:reply-to;
 bh=05DFg3vE15yNJSOlKCgnlrYPtKhMQQNKh7JYpmY5NHo=;
 b=ma2USk8GfaaN8UYs8esZtMnLNbTToXPusS7QUdqtzlIRfOw0vjEcqWoNpwB90h/Kny
 2ltgADO2NXaZhn8SLWdIZEx1PZ+jxl/iIfmEYJ20MrfFiPybiuzXQ5UAobyIPxwCfopW
 j/MRcOHO/4bMv5uqehN+D4vF1CFRWj7ig9V/5NKsVQHgPMofPTuzQPwi37y3Sc0papmJ
 cdeFT3V1pQuTRyf9OzyZlZ27uRJn8Doa7w2z2Ctxti5fXbzHMOp2H4iyNvZOTRbSJC3g
 7+J0QhvPOoeMHCsDu8CtPmJBFDG/Zy8jubT0A0Xp6NEoWwdx9tskM48yWeynyYsZPkDg
 W6gg==
X-Gm-Message-State: ACrzQf28Wqgr8yBFVFqp47EN1JWup/ymHl0GeaxiHgE4EhGCe16UiMZX
 x96ZuBXslyqP+MMafUSgfd4=
X-Google-Smtp-Source: AMsMyM5Qw3/Cozseq+ZMJ4GjjEWoCWA60c6AUmtvVtxpSDefJ6TcnK6EBGKXqUrcjFthFIm87jEQWg==
X-Received: by 2002:a17:902:8549:b0:178:6399:3e0f with SMTP id
 d9-20020a170902854900b0017863993e0fmr54830417plo.35.1666945662735; 
 Fri, 28 Oct 2022 01:27:42 -0700 (PDT)
Received: from kparameshwa-a02.vmware.com.com ([49.206.8.107])
 by smtp.gmail.com with ESMTPSA id
 x35-20020a634863000000b00460c67afbd5sm2314798pgk.7.2022.10.28.01.27.40
 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256);
 Fri, 28 Oct 2022 01:27:41 -0700 (PDT)
From: Kumara Parameshwaran <kumaraparamesh92@gmail.com>
X-Google-Original-From: Kumara Parameshwaran <kumaraparmesh92@gmail.com>
To: jiayu.hu@intel.com
Cc: dev@dpdk.org,
	Kumara Parameshwaran <kumaraparamesh92@gmail.com>
Subject: [PATCH v3] gro: fix reordering of packets in GRO library
Date: Fri, 28 Oct 2022 13:57:36 +0530
Message-Id: <20221028082736.6259-1-kumaraparmesh92@gmail.com>
X-Mailer: git-send-email 2.32.0 (Apple Git-132)
In-Reply-To: <20221013101854.95244-1-kumaraparmesh92@gmail.com>
References: <20221013101854.95244-1-kumaraparmesh92@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

From: Kumara Parameshwaran <kumaraparamesh92@gmail.com>

When a TCP packet contains a flag such as PSH, it is returned
immediately to the application even though there might be packets of
the same flow in the GRO table. If the PSH flag is set on a segment,
all packets up to that segment should be delivered immediately. But
the current implementation delivers only the last arrived packet with
the PSH flag set, which causes re-ordering.

With this patch, if a packet carries TCP flags other than just the
ACK flag and there are no previous packets for the flow, the packet is
returned immediately; otherwise it is merged with the previous
segments, and the flags of the last segment are applied to the entire
coalesced segment. This matches the behaviour of the Linux stack.
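
For illustration, the sketch below is not part of the patch; it mirrors
the update_tcp_hdr_flags() helper added to gro_tcp4.c and shows how the
flags of the last segment can be propagated to the coalesced head
segment by OR-ing them into its TCP header (the helper name
propagate_last_seg_flags is illustrative only):

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_tcp.h>
#include <rte_mbuf.h>

/* Illustrative sketch: OR the TCP flags of the newly merged (last)
 * segment into the TCP header of the coalesced head mbuf, so that
 * flags such as PSH or FIN survive coalescing.
 */
static inline void
propagate_last_seg_flags(struct rte_mbuf *head,
		const struct rte_tcp_hdr *last_tcp_hdr)
{
	struct rte_ether_hdr *eth_hdr;
	struct rte_ipv4_hdr *ipv4_hdr;
	struct rte_tcp_hdr *merged_tcp_hdr;

	eth_hdr = rte_pktmbuf_mtod(head, struct rte_ether_hdr *);
	ipv4_hdr = (struct rte_ipv4_hdr *)((char *)eth_hdr + head->l2_len);
	merged_tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr + head->l3_len);
	merged_tcp_hdr->tcp_flags |= last_tcp_hdr->tcp_flags;
}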

Signed-off-by: Kumara Parameshwaran <kumaraparamesh92@gmail.com>
---
v1:
	If the received packet is not a pure ACK packet, check whether
	there are any previous packets in the flow; if so, include the
	received packet in the coalescing logic as well and apply the flags
	of the last received packet to the entire coalesced segment, which
	avoids re-ordering.

	Consider a case where P1(PSH), P2(ACK), P3(ACK) are received in a
	burst. P1 carries the PSH flag and, since the flow has no prior
	packets, it is copied to unprocess_pkts, while P2(ACK) and P3(ACK)
	are merged together. In the existing code, the merged P2,P3 segment
	would be delivered first and the unprocessed packets copied later,
	which causes reordering. With the patch, the unprocessed packets are
	copied first and then the packets flushed from the GRO table, as
	sketched below.
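
	A simplified, hypothetical sketch of that ordering (not the actual
	rte_gro code; the helper and its parameters are illustrative only)
	is:

	#include <stdint.h>
	#include <string.h>
	#include <rte_mbuf.h>

	/* Build the output burst so that the unprocessed packets (e.g. P1
	 * with PSH) come first and the packets flushed from the GRO table
	 * follow, preserving the original per-flow order.
	 */
	static uint16_t
	build_output_burst(struct rte_mbuf **pkts, uint16_t nb_pkts,
			struct rte_mbuf **unprocess_pkts, uint16_t unprocess_num,
			struct rte_mbuf **flushed_pkts, uint16_t flushed_num)
	{
		uint16_t i = 0;

		if (unprocess_num > 0) {
			memcpy(&pkts[i], unprocess_pkts,
					sizeof(struct rte_mbuf *) * unprocess_num);
			i = unprocess_num;
		}
		if (flushed_num > 0 && i + flushed_num <= nb_pkts) {
			memcpy(&pkts[i], flushed_pkts,
					sizeof(struct rte_mbuf *) * flushed_num);
			i += flushed_num;
		}
		return i;	/* packets handed back to the application */
	}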

	Testing done:
	The csum forwarding mode of test-pmd was modified to support the
	following test: a 10MB GET request from client to server via
	test-pmd (static ARP entries added on the client and the server).
	GRO and TSO were enabled in test-pmd so that packets received from
	the client MAC are sent to the server MAC and vice versa.
	In the above test, without the patch the client observed
	re-ordering of 25 packets, while with the patch no packet
	re-ordering was observed.

v2: 
	Fix warnings in the commit message and comments.
	Do not consider a packet as a candidate for merging if it contains
	the SYN/RST flag.

v3:
	Fix warnings.

 lib/gro/gro_tcp4.c | 44 +++++++++++++++++++++++++++++++++++++-------
 lib/gro/rte_gro.c  | 18 +++++++++---------
 2 files changed, 46 insertions(+), 16 deletions(-)

diff --git a/lib/gro/gro_tcp4.c b/lib/gro/gro_tcp4.c
index 8f5e800250..2ce0c1391c 100644
--- a/lib/gro/gro_tcp4.c
+++ b/lib/gro/gro_tcp4.c
@@ -188,6 +188,19 @@ update_header(struct gro_tcp4_item *item)
 			pkt->l2_len);
 }
 
+static inline void
+update_tcp_hdr_flags(struct rte_tcp_hdr *tcp_hdr, struct rte_mbuf *pkt)
+{
+	struct rte_ether_hdr *eth_hdr;
+	struct rte_ipv4_hdr *ipv4_hdr;
+	struct rte_tcp_hdr *merged_tcp_hdr;
+
+	eth_hdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
+	ipv4_hdr = (struct rte_ipv4_hdr *)((char *)eth_hdr + pkt->l2_len);
+	merged_tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr + pkt->l3_len);
+	merged_tcp_hdr->tcp_flags |= tcp_hdr->tcp_flags;
+}
+
 int32_t
 gro_tcp4_reassemble(struct rte_mbuf *pkt,
 		struct gro_tcp4_tbl *tbl,
@@ -206,6 +219,7 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,
 	uint32_t i, max_flow_num, remaining_flow_num;
 	int cmp;
 	uint8_t find;
+	uint32_t start_idx;
 
 	/*
 	 * Don't process the packet whose TCP header length is greater
@@ -219,12 +233,6 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,
 	tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr + pkt->l3_len);
 	hdr_len = pkt->l2_len + pkt->l3_len + pkt->l4_len;
 
-	/*
-	 * Don't process the packet which has FIN, SYN, RST, PSH, URG, ECE
-	 * or CWR set.
-	 */
-	if (tcp_hdr->tcp_flags != RTE_TCP_ACK_FLAG)
-		return -1;
 	/*
 	 * Don't process the packet whose payload length is less than or
 	 * equal to 0.
@@ -263,12 +271,30 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,
 		if (tbl->flows[i].start_index != INVALID_ARRAY_INDEX) {
 			if (is_same_tcp4_flow(tbl->flows[i].key, key)) {
 				find = 1;
+				start_idx = tbl->flows[i].start_index;
 				break;
 			}
 			remaining_flow_num--;
 		}
 	}
 
+	if (tcp_hdr->tcp_flags != RTE_TCP_ACK_FLAG) {
+		/*
+		 * Check and try merging the current TCP segment with the previous
+		 * TCP segment if the TCP header does not contain the RST or SYN flag.
+		 * There are cases where the last segment is sent with FIN|PSH|ACK
+		 * which should also be considered for merging with previous segments.
+		 */
+		if (find && !(tcp_hdr->tcp_flags & (RTE_TCP_RST_FLAG|RTE_TCP_SYN_FLAG)))
+			/*
+			 * Since a flag such as PSH is set, the start time is set to 0 so that
+			 * the segment is flushed immediately.
+			 */
+			tbl->items[start_idx].start_time = 0;
+		else
+			return -1;
+	}
+
 	/*
 	 * Fail to find a matched flow. Insert a new flow and store the
 	 * packet into the flow.
@@ -303,8 +329,12 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,
 				is_atomic);
 		if (cmp) {
 			if (merge_two_tcp4_packets(&(tbl->items[cur_idx]),
-						pkt, cmp, sent_seq, ip_id, 0))
+						pkt, cmp, sent_seq, ip_id, 0)) {
+				if (tbl->items[cur_idx].start_time == 0)
+					update_tcp_hdr_flags(tcp_hdr, tbl->items[cur_idx].firstseg);
 				return 1;
+			}
+
 			/*
 			 * Fail to merge the two packets, as the packet
 			 * length is greater than the max value. Store
diff --git a/lib/gro/rte_gro.c b/lib/gro/rte_gro.c
index e35399fd42..87c5502dce 100644
--- a/lib/gro/rte_gro.c
+++ b/lib/gro/rte_gro.c
@@ -283,10 +283,17 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
 	if ((nb_after_gro < nb_pkts)
 		 || (unprocess_num < nb_pkts)) {
 		i = 0;
+		/* Copy unprocessed packets */
+		if (unprocess_num > 0) {
+			memcpy(&pkts[i], unprocess_pkts,
+					sizeof(struct rte_mbuf *) *
+					unprocess_num);
+			i = unprocess_num;
+		}
 		/* Flush all packets from the tables */
 		if (do_vxlan_tcp_gro) {
-			i = gro_vxlan_tcp4_tbl_timeout_flush(&vxlan_tcp_tbl,
-					0, pkts, nb_pkts);
+			i += gro_vxlan_tcp4_tbl_timeout_flush(&vxlan_tcp_tbl,
+					0, &pkts[i], nb_pkts - i);
 		}
 
 		if (do_vxlan_udp_gro) {
@@ -304,13 +311,6 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
 			i += gro_udp4_tbl_timeout_flush(&udp_tbl, 0,
 					&pkts[i], nb_pkts - i);
 		}
-		/* Copy unprocessed packets */
-		if (unprocess_num > 0) {
-			memcpy(&pkts[i], unprocess_pkts,
-					sizeof(struct rte_mbuf *) *
-					unprocess_num);
-		}
-		nb_after_gro = i + unprocess_num;
 	}
 
 	return nb_after_gro;
-- 
2.25.1