From: Tyler Retzlaff
To: dev@dpdk.org
Cc: Mattias Rönnblom, "Min Hu (Connor)", Morten Brørup, Abdullah Sevincer, Ajit Khaparde, Akhil Goyal, Alok Prasad, Amit Bernstein, Anatoly Burakov, Andrew Boyer, Andrew Rybchenko, Ankur Dwivedi, Anoob Joseph, Ashish Gupta, Ashwin Sekhar T K, Bruce Richardson, Byron Marohn, Chaoyong He, Chas Williams, Chenbo Xia, Chengwen Feng, Conor Walsh, Cristian Dumitrescu, Dariusz Sosnowski, David Hunt, Devendra Singh Rawat, Ed Czeck, Evgeny Schemeilin, Fan Zhang, Gagandeep Singh, Guoyang Zhou, Harman Kalra, Harry van Haaren, Hemant Agrawal, Honnappa Nagarahalli, Hyong Youb Kim, Jakub Grajciar, Jerin Jacob, Jian Wang, Jiawen Wu, Jie Hai, Jingjing Wu, John Daley, John Miller, Joyce Kong, Junfeng Guo, Kai Ji, Kevin Laatz, Kiran Kumar K, Konstantin Ananyev, Lee Daly, Liang Ma, Liron Himi, Long Li, Maciej Czekaj, Matan Azrad, Matt Peters, Maxime Coquelin, Michael Shamis, Nagadheeraj Rottela, Nicolas Chautru, Nithin Dabilpuram, Ori Kam, Pablo de Lara, Pavan Nikhilesh, Peter Mccarthy, Radu Nicolau, Rahul Lakkireddy, Rakesh Kudurumalla, Raveendra Padasalagi, Reshma Pattan, Ron Beider, Ruifeng Wang, Sachin Saxena, Selwin Sebastian, Shai Brandes, Shepard Siegel, Shijith Thotton, Sivaprasad Tummala, Somnath Kotur, Srikanth Yalavarthi, Stephen Hemminger, Steven Webster, Suanming Mou, Sunil Kumar Kori, Sunil Uttarwar, Sunila Sahu, Tejasree Kondoj, Viacheslav Ovsiienko, Vikas Gupta, Volodymyr Fialko, Wajeeh Atrash, Wisam Jaddo, Xiaoyun Wang, Yipeng Wang, Yisen Zhuang, Yuying Zhang, Zhangfei Gao, Zhirun Yan, Ziyang Xuan, Tyler Retzlaff
Subject: [PATCH 40/83] event/sw: move alignment attribute on types
Date: Wed, 20 Mar 2024 08:37:33 -0700
Message-Id: <1710949096-5786-41-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1710949096-5786-1-git-send-email-roretzla@linux.microsoft.com>
References: <1710949096-5786-1-git-send-email-roretzla@linux.microsoft.com>
List-Id: DPDK patches and discussions

Move the location of __rte_aligned(a) to the new conventional location.
The new placement, between {struct,union} and the tag, imparts the
desired alignment on the type regardless of the toolchain in use, for
both C and C++. It also avoids confusing Doxygen when generating
documentation.
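For illustration only (not taken from this patch: the struct names and the
plain compiler attribute below are stand-ins for DPDK's __rte_aligned() /
__rte_cache_aligned macros and RTE_CACHE_LINE_SIZE), a minimal sketch of
the old and new placements, plus the member-level alignas() form used in
this series:

	#include <assert.h>	/* static_assert */
	#include <stdalign.h>	/* alignas, alignof */
	#include <stddef.h>	/* offsetof */
	#include <stdint.h>

	#define CACHE_LINE 64	/* stand-in for RTE_CACHE_LINE_SIZE */

	/* Old convention: the attribute trails the definition. GCC/clang
	 * accept this in C, but it does not map onto every toolchain's
	 * alignment keyword and it confuses Doxygen. */
	struct old_style {
		uint32_t a;
	} __attribute__((aligned(CACHE_LINE)));

	/* New convention: the alignment sits between the struct keyword and
	 * the tag (spelled __rte_cache_aligned / __rte_aligned(a) in DPDK),
	 * a position the supported C and C++ toolchains all honor. */
	struct __attribute__((aligned(CACHE_LINE))) new_style {
		uint32_t a;
	};

	/* Individual members use the standard C11 alignas() prefix instead
	 * of a trailing attribute. */
	struct member_aligned {
		uint32_t head;
		alignas(CACHE_LINE) uint32_t tail;	/* starts a new cache line */
	};

	/* Both placements impart the requested alignment. */
	static_assert(alignof(struct old_style) == CACHE_LINE, "type alignment");
	static_assert(alignof(struct new_style) == CACHE_LINE, "type alignment");
	static_assert(offsetof(struct member_aligned, tail) == CACHE_LINE, "member offset");

Built with gcc or clang at -std=gnu11, the static_assert lines confirm the
alignment is applied in both positions; the trailing form is what this
patch removes.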
Signed-off-by: Tyler Retzlaff
---
 drivers/event/sw/event_ring.h |  2 +-
 drivers/event/sw/iq_chunk.h   |  4 ++--
 drivers/event/sw/sw_evdev.h   | 18 +++++++++---------
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/drivers/event/sw/event_ring.h b/drivers/event/sw/event_ring.h
index 2b86ca9..29db267 100644
--- a/drivers/event/sw/event_ring.h
+++ b/drivers/event/sw/event_ring.h
@@ -27,7 +27,7 @@ struct rob_ring {
 	uint32_t size;
 	uint32_t write_idx;
 	uint32_t read_idx;
-	void *ring[0] __rte_cache_aligned;
+	alignas(RTE_CACHE_LINE_SIZE) void *ring[0];
 };
 
 static inline struct rob_ring *
diff --git a/drivers/event/sw/iq_chunk.h b/drivers/event/sw/iq_chunk.h
index 31d013e..7a7a878 100644
--- a/drivers/event/sw/iq_chunk.h
+++ b/drivers/event/sw/iq_chunk.h
@@ -11,10 +11,10 @@
 
 #define IQ_ROB_NAMESIZE 12
 
-struct sw_queue_chunk {
+struct __rte_cache_aligned sw_queue_chunk {
 	struct rte_event events[SW_EVS_PER_Q_CHUNK];
 	struct sw_queue_chunk *next;
-} __rte_cache_aligned;
+};
 
 static __rte_always_inline bool
 iq_empty(struct sw_iq *iq)
diff --git a/drivers/event/sw/sw_evdev.h b/drivers/event/sw/sw_evdev.h
index c7b943a..c6e649c 100644
--- a/drivers/event/sw/sw_evdev.h
+++ b/drivers/event/sw/sw_evdev.h
@@ -170,14 +170,14 @@ struct sw_port {
 	int16_t num_ordered_qids;
 
 	/** Ring and buffer for pulling events from workers for scheduling */
-	struct rte_event_ring *rx_worker_ring __rte_cache_aligned;
+	alignas(RTE_CACHE_LINE_SIZE) struct rte_event_ring *rx_worker_ring;
 
 	/** Ring and buffer for pushing packets to workers after scheduling */
 	struct rte_event_ring *cq_worker_ring;
 
 	/* hole */
 	/* num releases yet to be completed on this port */
-	uint16_t outstanding_releases __rte_cache_aligned;
+	alignas(RTE_CACHE_LINE_SIZE) uint16_t outstanding_releases;
 	uint16_t inflight_max; /* app requested max inflights for this port */
 	uint16_t inflight_credits; /* num credits this port has right now */
 	uint8_t implicit_release; /* release events before dequeuing */
@@ -191,7 +191,7 @@ struct sw_port {
 	/* bucket values in 4s for shorter reporting */
 
 	/* History list structs, containing info on pkts egressed to worker */
-	uint16_t hist_head __rte_cache_aligned;
+	alignas(RTE_CACHE_LINE_SIZE) uint16_t hist_head;
 	uint16_t hist_tail;
 	uint16_t inflights;
 	struct sw_hist_list_entry hist_list[SW_PORT_HIST_LIST];
@@ -221,7 +221,7 @@ struct sw_evdev {
 	uint32_t xstats_count_mode_queue;
 
 	/* Minimum burst size*/
-	uint32_t sched_min_burst_size __rte_cache_aligned;
+	alignas(RTE_CACHE_LINE_SIZE) uint32_t sched_min_burst_size;
 	/* Port dequeue burst size*/
 	uint32_t sched_deq_burst_size;
 	/* Refill pp buffers only once per scheduler call*/
@@ -231,9 +231,9 @@ struct sw_evdev {
 	uint32_t sched_min_burst;
 
 	/* Contains all ports - load balanced and directed */
-	struct sw_port ports[SW_PORTS_MAX] __rte_cache_aligned;
+	alignas(RTE_CACHE_LINE_SIZE) struct sw_port ports[SW_PORTS_MAX];
 
-	rte_atomic32_t inflights __rte_cache_aligned;
+	alignas(RTE_CACHE_LINE_SIZE) rte_atomic32_t inflights;
 
 	/*
 	 * max events in this instance. Cached here for performance.
@@ -242,18 +242,18 @@ struct sw_evdev {
 	uint32_t nb_events_limit;
 
 	/* Internal queues - one per logical queue */
-	struct sw_qid qids[RTE_EVENT_MAX_QUEUES_PER_DEV] __rte_cache_aligned;
+	alignas(RTE_CACHE_LINE_SIZE) struct sw_qid qids[RTE_EVENT_MAX_QUEUES_PER_DEV];
 	struct sw_queue_chunk *chunk_list_head;
 	struct sw_queue_chunk *chunks;
 
 	/* Cache how many packets are in each cq */
-	uint16_t cq_ring_space[SW_PORTS_MAX] __rte_cache_aligned;
+	alignas(RTE_CACHE_LINE_SIZE) uint16_t cq_ring_space[SW_PORTS_MAX];
 
 	/* Array of pointers to load-balanced QIDs sorted by priority level */
 	struct sw_qid *qids_prioritized[RTE_EVENT_MAX_QUEUES_PER_DEV];
 
 	/* Stats */
-	struct sw_point_stats stats __rte_cache_aligned;
+	alignas(RTE_CACHE_LINE_SIZE) struct sw_point_stats stats;
 	uint64_t sched_called;
 	int32_t sched_quanta;
 	uint64_t sched_no_iq_enqueues;
-- 
1.8.3.1