From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id E355B43C03;
	Tue, 27 Feb 2024 06:42:21 +0100 (CET)
Received: from mails.dpdk.org (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 20352402B2;
	Tue, 27 Feb 2024 06:42:13 +0100 (CET)
Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182])
 by mails.dpdk.org (Postfix) with ESMTP id B0146402B2
 for <dev@dpdk.org>; Tue, 27 Feb 2024 06:41:41 +0100 (CET)
Received: by linux.microsoft.com (Postfix, from userid 1086)
 id CEA5020B74C3; Mon, 26 Feb 2024 21:41:40 -0800 (PST)
DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com CEA5020B74C3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com;
 s=default; t=1709012500;
 bh=3zEzVolRJmbWvBEOd3sMBl19ZxVX27jSMrbHklt/Aag=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=hFH2+CLSJTmjz5pzWh2Ok7cDr1NT8wisu2wsfpqNiT42YWby4OgUBEhOWVEYnuv6U
 YEGm0NbP/vRMCXrMABIoTIRPAM9T9sAukjry7LFusIIBVRMdcIQ7ILlLQGtdNv787A
 3mmuecXbVJTpFaxbLnBhi1LZyhkwF/pAyT/B0+FI=
From: Tyler Retzlaff <roretzla@linux.microsoft.com>
To: dev@dpdk.org
Cc: Ajit Khaparde <ajit.khaparde@broadcom.com>,
 Andrew Boyer <andrew.boyer@amd.com>,
 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
 Bruce Richardson <bruce.richardson@intel.com>,
 Chenbo Xia <chenbox@nvidia.com>, Chengwen Feng <fengchengwen@huawei.com>,
 Dariusz Sosnowski <dsosnowski@nvidia.com>,
 David Christensen <drc@linux.vnet.ibm.com>,
 Hyong Youb Kim <hyonkim@cisco.com>, Jerin Jacob <jerinj@marvell.com>,
 Jie Hai <haijie1@huawei.com>, Jingjing Wu <jingjing.wu@intel.com>,
 John Daley <johndale@cisco.com>, Kevin Laatz <kevin.laatz@intel.com>,
 Kiran Kumar K <kirankumark@marvell.com>,
 Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>,
 Maciej Czekaj <mczekaj@marvell.com>, Matan Azrad <matan@nvidia.com>,
 Maxime Coquelin <maxime.coquelin@redhat.com>,
 Nithin Dabilpuram <ndabilpuram@marvell.com>, Ori Kam <orika@nvidia.com>,
 Ruifeng Wang <ruifeng.wang@arm.com>, Satha Rao <skoteshwar@marvell.com>,
 Somnath Kotur <somnath.kotur@broadcom.com>,
 Suanming Mou <suanmingm@nvidia.com>, Sunil Kumar Kori <skori@marvell.com>,
 Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
 Yisen Zhuang <yisen.zhuang@huawei.com>,
 Yuying Zhang <Yuying.Zhang@intel.com>, mb@smartsharesystems.com,
 Tyler Retzlaff <roretzla@linux.microsoft.com>
Subject: [PATCH v6 03/23] common/idpf: use mbuf descriptor accessors
Date: Mon, 26 Feb 2024 21:41:19 -0800
Message-Id: <1709012499-12813-4-git-send-email-roretzla@linux.microsoft.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1709012499-12813-1-git-send-email-roretzla@linux.microsoft.com>
References: <1706657173-26166-1-git-send-email-roretzla@linux.microsoft.com>
 <1709012499-12813-1-git-send-email-roretzla@linux.microsoft.com>
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

RTE_MARKER typedefs are a GCC extension unsupported by MSVC. Use the
new rte_mbuf_rearm_data and rte_mbuf_rx_descriptor_fields1 accessors,
which provide a pointer of compatible type without referencing the
marker fields.
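
For illustration, the conversion follows this pattern (a minimal sketch
drawn from the hunks below; mb_def, rxq, rx_pkts and rearm0 are the
variables used there, not new code added by this patch):

    /* before: take the address of the rearm_data marker (an
     * RTE_MARKER typedef), which MSVC cannot compile */
    uintptr_t p = (uintptr_t)&mb_def.rearm_data;
    rxq->mbuf_initializer = *(uint64_t *)p;
    _mm256_storeu_si256((__m256i *)&rx_pkts[i]->rearm_data, rearm0);

    /* after: the accessor returns a pointer of compatible type to
     * the same bytes, so no marker field is referenced */
    rxq->mbuf_initializer = *rte_mbuf_rearm_data(&mb_def);
    _mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i]), rearm0);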

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 drivers/common/idpf/idpf_common_rxtx.c        |  4 +-
 drivers/common/idpf/idpf_common_rxtx_avx512.c | 73 +++++++--------------------
 2 files changed, 18 insertions(+), 59 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 83b131e..62ddf2e 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1595,7 +1595,6 @@
 static inline int
 idpf_rxq_vec_setup_default(struct idpf_rx_queue *rxq)
 {
-	uintptr_t p;
 	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
 
 	mb_def.nb_segs = 1;
@@ -1605,8 +1604,7 @@
 
 	/* prevent compiler reordering: rearm_data covers previous fields */
 	rte_compiler_barrier();
-	p = (uintptr_t)&mb_def.rearm_data;
-	rxq->mbuf_initializer = *(uint64_t *)p;
+	rxq->mbuf_initializer = *rte_mbuf_rearm_data(&mb_def);
 	return 0;
 }
 
diff --git a/drivers/common/idpf/idpf_common_rxtx_avx512.c b/drivers/common/idpf/idpf_common_rxtx_avx512.c
index f65e8d5..f9e2939 100644
--- a/drivers/common/idpf/idpf_common_rxtx_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -307,19 +307,6 @@
 					/* octet 15~14, low 16 bits pkt_len */
 			 0xFFFFFFFF     /* pkt_type set as unknown */
 			);
-	/**
-	 * compile-time check the shuffle layout is correct.
-	 * NOTE: the first field (lowest address) is given last in set_epi
-	 * calls above.
-	 */
-	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
-			 offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
-	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
-			 offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
-	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) !=
-			 offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10);
-	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) !=
-			 offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12);
 
 	uint16_t i, received;
 
@@ -455,13 +442,7 @@
 		 * add in the previously computed rx_descriptor fields to
 		 * make a single 256-bit write per mbuf
 		 */
-		/* check the structure matches expectations */
-		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
-				 offsetof(struct rte_mbuf, rearm_data) + 8);
-		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) !=
-				 RTE_ALIGN(offsetof(struct rte_mbuf,
-						    rearm_data),
-						    16));
+
 		/* build up data and do writes */
 		__m256i rearm0, rearm1, rearm2, rearm3, rearm4, rearm5,
 			rearm6, rearm7;
@@ -476,13 +457,13 @@
 		rearm0 = _mm256_permute2f128_si256(mbuf_init, mb0_1, 0x20);
 
 		/* write to mbuf */
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 6]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 6]),
 				    rearm6);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 4]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 4]),
 				    rearm4);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 2]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 2]),
 				    rearm2);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 0]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 0]),
 				    rearm0);
 
 		rearm7 = _mm256_blend_epi32(mbuf_init, mb6_7, 0xF0);
@@ -491,13 +472,13 @@
 		rearm1 = _mm256_blend_epi32(mbuf_init, mb0_1, 0xF0);
 
 		/* again write to mbufs */
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 7]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 7]),
 				    rearm7);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 5]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 5]),
 				    rearm5);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 3]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 3]),
 				    rearm3);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 1]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 1]),
 				    rearm1);
 
 		/* perform dd_check */
@@ -768,19 +749,6 @@
 					/* octet 15~14, low 16 bits pkt_len */
 			 0xFFFFFFFF     /* pkt_type set as unknown */
 			);
-	/**
-	 * compile-time check the above crc and shuffle layout is correct.
-	 * NOTE: the first field (lowest address) is given last in set_epi
-	 * calls above.
-	 */
-	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
-			 offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
-	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
-			 offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
-	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) !=
-			 offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10);
-	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) !=
-			 offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12);
 
 	uint16_t i, received;
 
@@ -915,13 +883,6 @@
 		 * add in the previously computed rx_descriptor fields to
 		 * make a single 256-bit write per mbuf
 		 */
-		/* check the structure matches expectations */
-		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
-				 offsetof(struct rte_mbuf, rearm_data) + 8);
-		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) !=
-				 RTE_ALIGN(offsetof(struct rte_mbuf,
-						    rearm_data),
-						    16));
 				/* build up data and do writes */
 		__m256i rearm0, rearm1, rearm2, rearm3, rearm4, rearm5,
 			rearm6, rearm7;
@@ -936,13 +897,13 @@
 		rearm0 = _mm256_permute2f128_si256(mbuf_init, mb0_1, 0x20);
 
 		/* write to mbuf */
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 6]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 6]),
 				    rearm6);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 4]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 4]),
 				    rearm4);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 2]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 2]),
 				    rearm2);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 0]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 0]),
 				    rearm0);
 
 		rearm7 = _mm256_blend_epi32(mbuf_init, mb6_7, 0xF0);
@@ -951,13 +912,13 @@
 		rearm1 = _mm256_blend_epi32(mbuf_init, mb0_1, 0xF0);
 
 		/* again write to mbufs */
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 7]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 7]),
 				    rearm7);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 5]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 5]),
 				    rearm5);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 3]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 3]),
 				    rearm3);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 1]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 1]),
 				    rearm1);
 
 		const __mmask8 dd_mask = _mm512_cmpeq_epi64_mask(
-- 
1.8.3.1