From mboxrd@z Thu Jan  1 00:00:00 1970
From: Anoob Joseph <anoobj@marvell.com>
To: Thomas Monjalon <thomas@monjalon.net>, Akhil Goyal <gakhil@marvell.com>,
 Jerin Jacob <jerinj@marvell.com>, Konstantin Ananyev
 <konstantin.v.ananyev@yandex.ru>
CC: Volodymyr Fialko <vfialko@marvell.com>, Hemant Agrawal
 <hemant.agrawal@nxp.com>, =?UTF-8?q?Mattias=20R=C3=B6nnblom?=
 <mattias.ronnblom@ericsson.com>,
 Kiran Kumar K <kirankumark@marvell.com>, <dev@dpdk.org>,
 Olivier Matz <olivier.matz@6wind.com>, Stephen Hemminger
 <stephen@networkplumber.org>
Subject: [PATCH v6 14/21] test/pdcp: add in-order delivery cases
Date: Tue, 30 May 2023 15:31:51 +0530
Message-ID: <20230530100158.1428-15-anoobj@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230530100158.1428-1-anoobj@marvell.com>
References: <20230527085910.972-1-anoobj@marvell.com>
 <20230530100158.1428-1-anoobj@marvell.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

From: Volodymyr Fialko <vfialko@marvell.com>

Add test cases to verify behaviour when in-order delivery is enabled and
packets arrive out of order. The PDCP library is expected to buffer the
out-of-order packets and return them in order once the missing packet
arrives.
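
For reference, below is a minimal standalone sketch (an illustration only,
not part of this patch) of the COUNT to HFN/SN split that the new
pdcp_sn_from_count_get()/pdcp_hfn_from_count_get() helpers and the existing
PDCP_WINDOW_SIZE() macro rely on, assuming a 12-bit SN:

    #include <assert.h>
    #include <stdint.h>

    #define SN_SIZE_12           12
    #define SN_MASK(sn_size)     ((1u << (sn_size)) - 1)
    #define WINDOW_SIZE(sn_size) (1u << ((sn_size) - 1))

    int main(void)
    {
        uint32_t count = 0x1005;

        /* SN is the low sn_size bits of COUNT; HFN is the remaining bits. */
        assert((count & SN_MASK(SN_SIZE_12)) == 0x005); /* SN */
        assert((count >> SN_SIZE_12) == 0x1);           /* HFN */

        /* The reordering window is half the SN space: 2048 for 12-bit SN. */
        assert(WINDOW_SIZE(SN_SIZE_12) == 2048);

        return 0;
    }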

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
---
 app/test/test_pdcp.c | 223 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 223 insertions(+)

diff --git a/app/test/test_pdcp.c b/app/test/test_pdcp.c
index cfe2ec6aa9..24d7826bc2 100644
--- a/app/test/test_pdcp.c
+++ b/app/test/test_pdcp.c
@@ -16,6 +16,15 @@
 #define NB_TESTS RTE_DIM(pdcp_test_params)
 #define PDCP_IV_LEN 16
 
+/* Assert that condition is true, or goto the mark */
+#define ASSERT_TRUE_OR_GOTO(cond, mark, ...) do {\
+	if (!(cond)) { \
+		RTE_LOG(ERR, USER1, "Error at: %s:%d\n", __func__, __LINE__); \
+		RTE_LOG(ERR, USER1, __VA_ARGS__); \
+		goto mark; \
+	} \
+} while (0)
+
 /* According to formula(7.2.a Window_Size) */
 #define PDCP_WINDOW_SIZE(sn_size) (1 << (sn_size - 1))
 
@@ -83,6 +92,38 @@ run_test_with_all_known_vec(const void *args)
 	return run_test_foreach_known_vec(test, false);
 }
 
+static int
+run_test_with_all_known_vec_until_first_pass(const void *args)
+{
+	test_with_conf_t test = args;
+
+	return run_test_foreach_known_vec(test, true);
+}
+
+static inline uint32_t
+pdcp_sn_mask_get(enum rte_security_pdcp_sn_size sn_size)
+{
+	return (1 << sn_size) - 1;
+}
+
+static inline uint32_t
+pdcp_sn_from_count_get(uint32_t count, enum rte_security_pdcp_sn_size sn_size)
+{
+	return (count & pdcp_sn_mask_get(sn_size));
+}
+
+static inline uint32_t
+pdcp_hfn_mask_get(enum rte_security_pdcp_sn_size sn_size)
+{
+	return ~pdcp_sn_mask_get(sn_size);
+}
+
+static inline uint32_t
+pdcp_hfn_from_count_get(uint32_t count, enum rte_security_pdcp_sn_size sn_size)
+{
+	return (count & pdcp_hfn_mask_get(sn_size)) >> sn_size;
+}
+
 static inline int
 pdcp_hdr_size_get(enum rte_security_pdcp_sn_size sn_size)
 {
@@ -416,6 +457,7 @@ create_test_conf_from_index(const int index, struct pdcp_test_conf *conf)
 
 	conf->entity.sess_mpool = ts_params->sess_pool;
 	conf->entity.cop_pool = ts_params->cop_pool;
+	conf->entity.ctrl_pdu_pool = ts_params->mbuf_pool;
 	conf->entity.pdcp_xfrm.bearer = pdcp_test_bearer[index];
 	conf->entity.pdcp_xfrm.en_ordering = 0;
 	conf->entity.pdcp_xfrm.remove_duplicates = 0;
@@ -868,6 +910,7 @@ test_sn_range_type(enum sn_range_type type, struct pdcp_test_conf *conf)
 
 	/* Configure Uplink to generate expected, encrypted packet */
 	pdcp_sn_to_raw_set(conf->input, new_sn, conf->entity.pdcp_xfrm.sn_size);
+	conf->entity.out_of_order_delivery = true;
 	conf->entity.reverse_iv_direction = true;
 	conf->entity.pdcp_xfrm.hfn = new_hfn;
 	conf->entity.sn = new_sn;
@@ -915,6 +958,171 @@ test_sn_minus_outside(struct pdcp_test_conf *t_conf)
 	return test_sn_range_type(SN_RANGE_MINUS_OUTSIDE, t_conf);
 }
 
+static struct rte_mbuf *
+generate_packet_for_dl_with_sn(struct pdcp_test_conf ul_conf, uint32_t count)
+{
+	enum rte_security_pdcp_sn_size sn_size = ul_conf.entity.pdcp_xfrm.sn_size;
+	int ret;
+
+	ul_conf.entity.pdcp_xfrm.hfn = pdcp_hfn_from_count_get(count, sn_size);
+	ul_conf.entity.sn = pdcp_sn_from_count_get(count, sn_size);
+	ul_conf.entity.out_of_order_delivery = true;
+	ul_conf.entity.reverse_iv_direction = true;
+	ul_conf.output_len = 0;
+
+	ret = test_attempt_single(&ul_conf);
+	if (ret != TEST_SUCCESS)
+		return NULL;
+
+	return mbuf_from_data_create(ul_conf.output, ul_conf.output_len);
+}
+
+static bool
+array_asc_sorted_check(struct rte_mbuf *m[], uint32_t len, enum rte_security_pdcp_sn_size sn_size)
+{
+	uint32_t i;
+
+	if (len < 2)
+		return true;
+
+	for (i = 0; i < (len - 1); i++) {
+		if (pdcp_sn_from_raw_get(rte_pktmbuf_mtod(m[i], void *), sn_size) >
+		    pdcp_sn_from_raw_get(rte_pktmbuf_mtod(m[i + 1], void *), sn_size))
+			return false;
+	}
+
+	return true;
+}
+
+static int
+test_reorder_gap_fill(struct pdcp_test_conf *ul_conf)
+{
+	const enum rte_security_pdcp_sn_size sn_size = ul_conf->entity.pdcp_xfrm.sn_size;
+	struct rte_mbuf *m0 = NULL, *m1 = NULL, *out_mb[2] = {0};
+	uint16_t nb_success = 0, nb_err = 0;
+	struct rte_pdcp_entity *pdcp_entity;
+	struct pdcp_test_conf dl_conf;
+	int ret = TEST_FAILED, nb_out;
+	uint8_t cdev_id;
+
+	const int start_count = 0;
+
+	if (ul_conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK)
+		return TEST_SKIPPED;
+
+	/* Create configuration for actual testing */
+	uplink_to_downlink_convert(ul_conf, &dl_conf);
+	dl_conf.entity.pdcp_xfrm.hfn = pdcp_hfn_from_count_get(start_count, sn_size);
+	dl_conf.entity.sn = pdcp_sn_from_count_get(start_count, sn_size);
+
+	pdcp_entity = test_entity_create(&dl_conf, &ret);
+	if (pdcp_entity == NULL)
+		return ret;
+
+	cdev_id = dl_conf.entity.dev_id;
+
+	/* Send packet with SN > RX_DELIV to create a gap */
+	m1 = generate_packet_for_dl_with_sn(*ul_conf, start_count + 1);
+	ASSERT_TRUE_OR_GOTO(m1 != NULL, exit, "Could not allocate buffer for packet\n");
+
+	/* Buffered packets after insert [NULL, m1] */
+	nb_success = test_process_packets(pdcp_entity, cdev_id, &m1, 1, out_mb, &nb_err);
+	ASSERT_TRUE_OR_GOTO(nb_err == 0, exit, "Error occurred during packet processing\n");
+	ASSERT_TRUE_OR_GOTO(nb_success == 0, exit, "Packet was not buffered as expected\n");
+	m1 = NULL; /* Packet was moved to PDCP lib */
+
+	/* Generate packet to fill the existing gap */
+	m0 = generate_packet_for_dl_with_sn(*ul_conf, start_count);
+	ASSERT_TRUE_OR_GOTO(m0 != NULL, exit, "Could not allocate buffer for packet\n");
+
+	/*
+	 * Buffered packets after insert [m0, m1]
+	 * Gap filled, all packets should be returned
+	 */
+	nb_success = test_process_packets(pdcp_entity, cdev_id, &m0, 1, out_mb, &nb_err);
+	ASSERT_TRUE_OR_GOTO(nb_err == 0, exit, "Error occurred during packet processing\n");
+	ASSERT_TRUE_OR_GOTO(nb_success == 2, exit,
+			"Packet count mismatch (received: %i, expected: 2)\n", nb_success);
+	m0 = NULL; /* Packet was moved to out_mb */
+
+	/* Check that the packets are in the correct order */
+	ASSERT_TRUE_OR_GOTO(array_asc_sorted_check(out_mb, nb_success, sn_size), exit,
+			"Error occurred during packet drain\n");
+
+	ret = TEST_SUCCESS;
+exit:
+	rte_pktmbuf_free(m0);
+	rte_pktmbuf_free(m1);
+	rte_pktmbuf_free_bulk(out_mb, nb_success);
+	nb_out = rte_pdcp_entity_release(pdcp_entity, out_mb);
+	rte_pktmbuf_free_bulk(out_mb, nb_out);
+	return ret;
+}
+
+static int
+test_reorder_buffer_full_window_size_sn_12(const struct pdcp_test_conf *ul_conf)
+{
+	const enum rte_security_pdcp_sn_size sn_size = ul_conf->entity.pdcp_xfrm.sn_size;
+	const uint32_t window_size = PDCP_WINDOW_SIZE(sn_size);
+	struct rte_mbuf *m1 = NULL, **out_mb = NULL;
+	uint16_t nb_success = 0, nb_err = 0;
+	struct rte_pdcp_entity *pdcp_entity;
+	struct pdcp_test_conf dl_conf;
+	const int rx_deliv = 0;
+	int ret = TEST_FAILED;
+	size_t i, nb_out;
+	uint8_t cdev_id;
+
+	if (ul_conf->entity.pdcp_xfrm.pkt_dir == RTE_SECURITY_PDCP_DOWNLINK ||
+		sn_size != RTE_SECURITY_PDCP_SN_SIZE_12)
+		return TEST_SKIPPED;
+
+	/* Create configuration for actual testing */
+	uplink_to_downlink_convert(ul_conf, &dl_conf);
+	dl_conf.entity.pdcp_xfrm.hfn = pdcp_hfn_from_count_get(rx_deliv, sn_size);
+	dl_conf.entity.sn = pdcp_sn_from_count_get(rx_deliv, sn_size);
+
+	pdcp_entity = test_entity_create(&dl_conf, &ret);
+	if (pdcp_entity == NULL)
+		return ret;
+
+	ASSERT_TRUE_OR_GOTO(pdcp_entity->max_pkt_cache >= window_size, exit,
+			"PDCP max packet cache is too small");
+	cdev_id = dl_conf.entity.dev_id;
+	out_mb = rte_zmalloc(NULL, pdcp_entity->max_pkt_cache * sizeof(uintptr_t), 0);
+	ASSERT_TRUE_OR_GOTO(out_mb != NULL, exit,
+			"Could not allocate buffer for holding out_mb buffers\n");
+
+	/* Send packets with SN > RX_DELIV to create a gap */
+	for (i = rx_deliv + 1; i < window_size; i++) {
+		m1 = generate_packet_for_dl_with_sn(*ul_conf, i);
+		ASSERT_TRUE_OR_GOTO(m1 != NULL, exit, "Could not allocate buffer for packet\n");
+		/* Buffered packets after insert [NULL, m1] */
+		nb_success = test_process_packets(pdcp_entity, cdev_id, &m1, 1, out_mb, &nb_err);
+		ASSERT_TRUE_OR_GOTO(nb_err == 0, exit, "Error occurred during packet buffering\n");
+		ASSERT_TRUE_OR_GOTO(nb_success == 0, exit, "Packet was not buffered as expected\n");
+	}
+
+	m1 = generate_packet_for_dl_with_sn(*ul_conf, rx_deliv);
+	ASSERT_TRUE_OR_GOTO(m1 != NULL, exit, "Could not allocate buffer for packet\n");
+	/* Insert missing packet */
+	nb_success = test_process_packets(pdcp_entity, cdev_id, &m1, 1, out_mb, &nb_err);
+	ASSERT_TRUE_OR_GOTO(nb_err == 0, exit, "Error occurred during packet buffering\n");
+	ASSERT_TRUE_OR_GOTO(nb_success == window_size, exit,
+			"Packet count mismatch (received: %i, expected: %i)\n",
+			nb_success, window_size);
+	m1 = NULL;
+
+	ret = TEST_SUCCESS;
+exit:
+	rte_pktmbuf_free(m1);
+	rte_pktmbuf_free_bulk(out_mb, nb_success);
+	nb_out = rte_pdcp_entity_release(pdcp_entity, out_mb);
+	rte_pktmbuf_free_bulk(out_mb, nb_out);
+	rte_free(out_mb);
+	return ret;
+}
+
 static int
 test_combined(struct pdcp_test_conf *ul_conf)
 {
@@ -971,10 +1179,25 @@ static struct unit_test_suite hfn_sn_test_cases  = {
 	}
 };
 
+static struct unit_test_suite reorder_test_cases  = {
+	.suite_name = "PDCP reorder",
+	.unit_test_cases = {
+		TEST_CASE_NAMED_WITH_DATA("test_reorder_gap_fill",
+			ut_setup_pdcp, ut_teardown_pdcp,
+			run_test_with_all_known_vec, test_reorder_gap_fill),
+		TEST_CASE_NAMED_WITH_DATA("test_reorder_buffer_full_window_size_sn_12",
+			ut_setup_pdcp, ut_teardown_pdcp,
+			run_test_with_all_known_vec_until_first_pass,
+			test_reorder_buffer_full_window_size_sn_12),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
 struct unit_test_suite *test_suites[] = {
 	NULL, /* Place holder for known_vector_cases */
 	&combined_mode_cases,
 	&hfn_sn_test_cases,
+	&reorder_test_cases,
 	NULL /* End of suites list */
 };
 
-- 
2.25.1