From: David Marchand
To: dev@dpdk.org
Cc: roretzla@linux.microsoft.com, Morten Brørup, Stephen Hemminger, Thomas Monjalon
Subject: [PATCH v12 2/4] mbuf: remove marker fields
Date: Wed, 19 Jun 2024 17:01:24 +0200
Message-ID: <20240619150126.1037902-3-david.marchand@redhat.com>
In-Reply-To: <20240619150126.1037902-1-david.marchand@redhat.com>
References: <1706657173-26166-1-git-send-email-roretzla@linux.microsoft.com> <20240619150126.1037902-1-david.marchand@redhat.com>

From: Tyler Retzlaff

RTE_MARKER typedefs are a GCC extension unsupported by MSVC. Remove the
RTE_MARKER fields from the rte_mbuf struct. Maintain the alignment of the
fields that followed the removed cacheline1 marker by placing a C11
alignas(RTE_CACHE_LINE_MIN_SIZE). Provide new rearm_data and
rx_descriptor_fields1 fields in anonymous unions, as single-element arrays
with types matching the original markers, to maintain API compatibility.
This change breaks the API for the cacheline{0,1} fields that have been
removed from rte_mbuf, but it does not break the ABI. To address the false
positives reported for the removed (zero-size) fields, provide the minimal
libabigail.abignore suppression for the rte_mbuf type.

Signed-off-by: Tyler Retzlaff
Reviewed-by: Morten Brørup
Acked-by: Stephen Hemminger
---
Changes since v11:
- moved libabigail suppression,
- moved RN update to API change,
- updated one comment in rte_mbuf_core.h referring to cacheline0,
- removed (unrelated) doxygen updates,
---
 devtools/libabigail.abignore           |   6 +
 doc/guides/rel_notes/release_24_07.rst |   3 +
 lib/mbuf/rte_mbuf.h                    |   4 +-
 lib/mbuf/rte_mbuf_core.h               | 188 +++++++++++++------------
 4 files changed, 109 insertions(+), 92 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 32a2ea309e..96b16059a8 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -33,6 +33,12 @@
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
 ; Temporary exceptions till next major ABI version ;
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+[suppress_type]
+        name = rte_mbuf
+        type_kind = struct
+        has_size_change = no
+        has_data_member = {cacheline0, rearm_data, rx_descriptor_fields1, cacheline1}
+
 [suppress_type]
         name = rte_pipeline_table_entry

diff --git a/doc/guides/rel_notes/release_24_07.rst b/doc/guides/rel_notes/release_24_07.rst
index ccd0f8e598..7c88de381b 100644
--- a/doc/guides/rel_notes/release_24_07.rst
+++ b/doc/guides/rel_notes/release_24_07.rst
@@ -178,6 +178,9 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* mbuf: ``RTE_MARKER`` fields ``cacheline0`` and ``cacheline1``
+  have been removed from ``struct rte_mbuf``.
+
 
 ABI Changes
 -----------

diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index 286b32b788..4c4722e002 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -108,7 +108,7 @@ int rte_get_tx_ol_flag_list(uint64_t mask, char *buf, size_t buflen);
 static inline void
 rte_mbuf_prefetch_part1(struct rte_mbuf *m)
 {
-	rte_prefetch0(&m->cacheline0);
+	rte_prefetch0(m);
 }
 
 /**
@@ -126,7 +126,7 @@ static inline void
 rte_mbuf_prefetch_part2(struct rte_mbuf *m)
 {
 #if RTE_CACHE_LINE_SIZE == 64
-	rte_prefetch0(&m->cacheline1);
+	rte_prefetch0(RTE_PTR_ADD(m, RTE_CACHE_LINE_MIN_SIZE));
 #else
 	RTE_SET_USED(m);
 #endif

diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index 9f580769cf..a0df265b5d 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -465,8 +465,6 @@ enum {
  * The generic rte_mbuf, containing a packet mbuf.
  */
 struct __rte_cache_aligned rte_mbuf {
-	RTE_MARKER cacheline0;
-
 	void *buf_addr;           /**< Virtual address of segment buffer. */
 #if RTE_IOVA_IN_MBUF
 	/**
@@ -474,7 +472,7 @@ struct __rte_cache_aligned rte_mbuf {
 	 * This field is undefined if the build is configured to use only
 	 * virtual address as IOVA (i.e. RTE_IOVA_IN_MBUF is 0).
 	 * Force alignment to 8-bytes, so as to ensure we have the exact
-	 * same mbuf cacheline0 layout for 32-bit and 64-bit. This makes
+	 * layout for the first cache line for 32-bit and 64-bit. This makes
 	 * working on vector drivers easier.
 	 */
 	alignas(sizeof(rte_iova_t)) rte_iova_t buf_iova;
@@ -488,127 +486,137 @@ struct __rte_cache_aligned rte_mbuf {
 #endif
 
 	/* next 8 bytes are initialised on RX descriptor rearm */
-	RTE_MARKER64 rearm_data;
-	uint16_t data_off;
-
-	/**
-	 * Reference counter. Its size should at least equal to the size
-	 * of port field (16 bits), to support zero-copy broadcast.
-	 * It should only be accessed using the following functions:
-	 * rte_mbuf_refcnt_update(), rte_mbuf_refcnt_read(), and
-	 * rte_mbuf_refcnt_set(). The functionality of these functions (atomic,
-	 * or non-atomic) is controlled by the RTE_MBUF_REFCNT_ATOMIC flag.
-	 */
-	RTE_ATOMIC(uint16_t) refcnt;
+	union {
+		uint64_t rearm_data[1];
+		__extension__
+		struct {
+			uint16_t data_off;
+
+			/**
+			 * Reference counter. Its size should at least equal to the size
+			 * of port field (16 bits), to support zero-copy broadcast.
+			 * It should only be accessed using the following functions:
+			 * rte_mbuf_refcnt_update(), rte_mbuf_refcnt_read(), and
+			 * rte_mbuf_refcnt_set(). The functionality of these functions (atomic,
+			 * or non-atomic) is controlled by the RTE_MBUF_REFCNT_ATOMIC flag.
+			 */
+			RTE_ATOMIC(uint16_t) refcnt;
 
-	/**
-	 * Number of segments. Only valid for the first segment of an mbuf
-	 * chain.
-	 */
-	uint16_t nb_segs;
+			/**
+			 * Number of segments. Only valid for the first segment of an mbuf
+			 * chain.
+			 */
+			uint16_t nb_segs;
 
-	/** Input port (16 bits to support more than 256 virtual ports).
-	 * The event eth Tx adapter uses this field to specify the output port.
-	 */
-	uint16_t port;
+			/** Input port (16 bits to support more than 256 virtual ports).
+			 * The event eth Tx adapter uses this field to specify the output port.
+			 */
+			uint16_t port;
+		};
+	};
 
 	uint64_t ol_flags;        /**< Offload features. */
 
-	/* remaining bytes are set on RX when pulling packet from descriptor */
-	RTE_MARKER rx_descriptor_fields1;
-
-	/*
-	 * The packet type, which is the combination of outer/inner L2, L3, L4
-	 * and tunnel types. The packet_type is about data really present in the
-	 * mbuf. Example: if vlan stripping is enabled, a received vlan packet
-	 * would have RTE_PTYPE_L2_ETHER and not RTE_PTYPE_L2_VLAN because the
-	 * vlan is stripped from the data.
-	 */
+	/* remaining 24 bytes are set on RX when pulling packet from descriptor */
 	union {
-		uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
+		/* void * type of the array elements is retained for driver compatibility. */
+		void *rx_descriptor_fields1[24 / sizeof(void *)];
 		__extension__
 		struct {
-			uint8_t l2_type:4;   /**< (Outer) L2 type. */
-			uint8_t l3_type:4;   /**< (Outer) L3 type. */
-			uint8_t l4_type:4;   /**< (Outer) L4 type. */
-			uint8_t tun_type:4;  /**< Tunnel type. */
+			/*
+			 * The packet type, which is the combination of outer/inner L2, L3, L4
+			 * and tunnel types. The packet_type is about data really present in the
+			 * mbuf. Example: if vlan stripping is enabled, a received vlan packet
+			 * would have RTE_PTYPE_L2_ETHER and not RTE_PTYPE_L2_VLAN because the
+			 * vlan is stripped from the data.
+			 */
 			union {
-				uint8_t inner_esp_next_proto;
-				/**< ESP next protocol type, valid if
-				 * RTE_PTYPE_TUNNEL_ESP tunnel type is set
-				 * on both Tx and Rx.
-				 */
+				uint32_t packet_type; /**< L2/L3/L4 and tunnel information. */
 				__extension__
 				struct {
-					uint8_t inner_l2_type:4;
-					/**< Inner L2 type. */
-					uint8_t inner_l3_type:4;
-					/**< Inner L3 type. */
+					uint8_t l2_type:4;   /**< (Outer) L2 type. */
+					uint8_t l3_type:4;   /**< (Outer) L3 type. */
+					uint8_t l4_type:4;   /**< (Outer) L4 type. */
+					uint8_t tun_type:4;  /**< Tunnel type. */
+					union {
+						uint8_t inner_esp_next_proto;
+						/**< ESP next protocol type, valid if
+						 * RTE_PTYPE_TUNNEL_ESP tunnel type is set
+						 * on both Tx and Rx.
+						 */
+						__extension__
+						struct {
+							uint8_t inner_l2_type:4;
+							/**< Inner L2 type. */
+							uint8_t inner_l3_type:4;
+							/**< Inner L3 type. */
+						};
+					};
+					uint8_t inner_l4_type:4; /**< Inner L4 type. */
 				};
 			};
-			uint8_t inner_l4_type:4; /**< Inner L4 type. */
-		};
-	};
-	uint32_t pkt_len;         /**< Total pkt len: sum of all segments. */
-	uint16_t data_len;        /**< Amount of data in segment buffer. */
-	/** VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_VLAN is set. */
-	uint16_t vlan_tci;
+			uint32_t pkt_len;         /**< Total pkt len: sum of all segments. */
+			uint16_t data_len;        /**< Amount of data in segment buffer. */
+			/** VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_VLAN is set. */
+			uint16_t vlan_tci;
 
-	union {
-		union {
-			uint32_t rss;     /**< RSS hash result if RSS enabled */
-			struct {
+			union {
 				union {
+					uint32_t rss;     /**< RSS hash result if RSS enabled */
 					struct {
-						uint16_t hash;
-						uint16_t id;
-					};
-					uint32_t lo;
-					/**< Second 4 flexible bytes */
-				};
-				uint32_t hi;
-				/**< First 4 flexible bytes or FD ID, dependent
-				 * on RTE_MBUF_F_RX_FDIR_* flag in ol_flags.
-				 */
-			} fdir;	/**< Filter identifier if FDIR enabled */
-			struct rte_mbuf_sched sched;
-			/**< Hierarchical scheduler : 8 bytes */
-			struct {
-				uint32_t reserved1;
-				uint16_t reserved2;
-				uint16_t txq;
-				/**< The event eth Tx adapter uses this field
-				 * to store Tx queue id.
-				 * @see rte_event_eth_tx_adapter_txq_set()
-				 */
-			} txadapter; /**< Eventdev ethdev Tx adapter */
-			uint32_t usr;
-			/**< User defined tags. See rte_distributor_process() */
-		} hash;                   /**< hash information */
-	};
+						union {
+							struct {
+								uint16_t hash;
+								uint16_t id;
+							};
+							uint32_t lo;
+							/**< Second 4 flexible bytes */
+						};
+						uint32_t hi;
+						/**< First 4 flexible bytes or FD ID, dependent
+						 * on RTE_MBUF_F_RX_FDIR_* flag in ol_flags.
+						 */
+					} fdir;	/**< Filter identifier if FDIR enabled */
+					struct rte_mbuf_sched sched;
+					/**< Hierarchical scheduler : 8 bytes */
+					struct {
+						uint32_t reserved1;
+						uint16_t reserved2;
+						uint16_t txq;
+						/**< The event eth Tx adapter uses this field
+						 * to store Tx queue id.
+						 * @see rte_event_eth_tx_adapter_txq_set()
+						 */
+					} txadapter; /**< Eventdev ethdev Tx adapter */
+					uint32_t usr;
+					/**< User defined tags. See rte_distributor_process() */
+				} hash;                   /**< hash information */
+			};
 
-	/** Outer VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_QINQ is set. */
-	uint16_t vlan_tci_outer;
+			/** Outer VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_QINQ is set. */
+			uint16_t vlan_tci_outer;
 
-	uint16_t buf_len;         /**< Length of segment buffer. */
+			uint16_t buf_len;         /**< Length of segment buffer. */
+		};
+	};
 
 	struct rte_mempool *pool; /**< Pool from which mbuf was allocated. */
 
 	/* second cache line - fields only used in slow path or on TX */
-	alignas(RTE_CACHE_LINE_MIN_SIZE) RTE_MARKER cacheline1;
-
 #if RTE_IOVA_IN_MBUF
 	/**
 	 * Next segment of scattered packet. Must be NULL in the last
 	 * segment or in case of non-segmented packet.
 	 */
+	alignas(RTE_CACHE_LINE_MIN_SIZE)
 	struct rte_mbuf *next;
 #else
 	/**
 	 * Reserved for dynamic fields
 	 * when the next pointer is in first cache line (i.e. RTE_IOVA_IN_MBUF is 0).
 	 */
+	alignas(RTE_CACHE_LINE_MIN_SIZE)
 	uint64_t dynfield2;
 #endif
-- 
2.45.1
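
For readers who want to see the pattern outside of the mbuf context, below is a
minimal, self-contained sketch of the marker-replacement technique the patch
uses: a zero-size marker field becomes an anonymous union holding a one-element
array that overlays the real fields, and alignas() takes over forcing the second
cache line. This is not part of the patch; the names demo_mbuf and
DEMO_CACHE_LINE_MIN are invented for illustration only.

/*
 * Standalone illustration (not DPDK code) of replacing marker fields
 * with an anonymous union plus alignas(), keeping all offsets intact.
 */
#include <assert.h>
#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>

#define DEMO_CACHE_LINE_MIN 64 /* stand-in for RTE_CACHE_LINE_MIN_SIZE */

struct demo_mbuf {
	void *buf_addr;
	uint64_t buf_iova;

	/*
	 * Instead of a zero-size marker, the same 8 bytes are exposed
	 * through a one-element uint64_t array that overlays the real
	 * fields, so &m->rearm_data keeps its address and meaning.
	 */
	union {
		uint64_t rearm_data[1];
		struct {
			uint16_t data_off;
			uint16_t refcnt;
			uint16_t nb_segs;
			uint16_t port;
		};
	};

	/*
	 * Second cache line: alignas() on the first member replaces the
	 * cache-aligned marker that used to force the boundary.
	 */
	alignas(DEMO_CACHE_LINE_MIN) struct demo_mbuf *next;
};

/* The union must not move anything: rearm_data aliases data_off. */
static_assert(offsetof(struct demo_mbuf, rearm_data) ==
	      offsetof(struct demo_mbuf, data_off),
	      "rearm_data must overlay data_off");
static_assert(offsetof(struct demo_mbuf, next) % DEMO_CACHE_LINE_MIN == 0,
	      "next must start a new cache line");

Built with any C11 compiler (e.g. gcc -std=c11 -c demo.c), the static
asserts confirm that dropping the markers shifts no field, which is why the
removal is only an API change and not an ABI change, and why the libabigail
reports suppressed above are false positives.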