From: Phil Yang
To: Joyce Kong, maxime.coquelin@redhat.com, jerinj@marvell.com,
 zhihong.wang@intel.com, xiaolong.ye@intel.com, beilei.xing@intel.com,
 jia.guo@intel.com, john.mcnamara@intel.com, matan@mellanox.com,
 shahafs@mellanox.com, viacheslavo@mellanox.com, Honnappa Nagarahalli,
 Ruifeng Wang
CC: dev@dpdk.org, nd
Thread-Topic: [PATCH v2 6/6] net/mlx5: replace restrict keyword with rte restrict
Date: Tue, 7 Jul 2020 02:28:04 +0000
References: <20200611033248.39049-1-joyce.kong@arm.com>
 <20200706074930.54299-1-joyce.kong@arm.com>
 <20200706074930.54299-7-joyce.kong@arm.com>
In-Reply-To: <20200706074930.54299-7-joyce.kong@arm.com>
Subject: Re: [dpdk-dev] [PATCH v2 6/6] net/mlx5: replace restrict keyword with rte restrict
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: Joyce Kong
> Sent: Monday, July 6, 2020 3:50 PM
> To: maxime.coquelin@redhat.com; jerinj@marvell.com;
> zhihong.wang@intel.com; xiaolong.ye@intel.com; beilei.xing@intel.com;
> jia.guo@intel.com; john.mcnamara@intel.com; matan@mellanox.com;
> shahafs@mellanox.com; viacheslavo@mellanox.com; Honnappa Nagarahalli;
> Phil Yang; Ruifeng Wang
> Cc: dev@dpdk.org; nd
> Subject: [PATCH v2 6/6] net/mlx5: replace restrict keyword with rte restrict
>
> The 'restrict' keyword is recognized in C99, which might have some
> issues with old compilers. It is better to use the wrapper
> '__rte_restrict' which can be supported by all compilers for
> restricted pointers.
>
> Signed-off-by: Joyce Kong

Reviewed-by: Phil Yang

> ---
>  drivers/net/mlx5/mlx5_rxtx.c | 208 +++++++++++++++++------------------
>  1 file changed, 104 insertions(+), 104 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
> index e4106bf0a..894f441f3 100644
> --- a/drivers/net/mlx5/mlx5_rxtx.c
> +++ b/drivers/net/mlx5/mlx5_rxtx.c
> @@ -113,13 +113,13 @@ mlx5_queue_state_modify(struct rte_eth_dev *dev,
>                         struct mlx5_mp_arg_queue_state_modify *sm);
>
>  static inline void
> -mlx5_lro_update_tcp_hdr(struct rte_tcp_hdr *restrict tcp,
> -                        volatile struct mlx5_cqe *restrict cqe,
> +mlx5_lro_update_tcp_hdr(struct rte_tcp_hdr *__rte_restrict tcp,
> +                        volatile struct mlx5_cqe *__rte_restrict cqe,
>                          uint32_t phcsum);
>
>  static inline void
> -mlx5_lro_update_hdr(uint8_t *restrict padd,
> -                    volatile struct mlx5_cqe *restrict cqe,
> +mlx5_lro_update_hdr(uint8_t *__rte_restrict padd,
> +                    volatile struct mlx5_cqe *__rte_restrict cqe,
>                      uint32_t len);
>
>  uint32_t mlx5_ptype_table[] __rte_cache_aligned = {
> @@ -374,7 +374,7 @@ mlx5_set_swp_types_table(void)
>   * Software Parser flags are set by pointer.
>   */
>  static __rte_always_inline uint32_t
> -txq_mbuf_to_swp(struct mlx5_txq_local *restrict loc,
> +txq_mbuf_to_swp(struct mlx5_txq_local *__rte_restrict loc,
>                 uint8_t *swp_flags,
>                 unsigned int olx)
>  {
> @@ -747,7 +747,7 @@ check_err_cqe_seen(volatile struct mlx5_err_cqe *err_cqe)
>   * the error completion entry is handled successfully.
>   */
>  static int
> -mlx5_tx_error_cqe_handle(struct mlx5_txq_data *restrict txq,
> +mlx5_tx_error_cqe_handle(struct mlx5_txq_data *__rte_restrict txq,
>                          volatile struct mlx5_err_cqe *err_cqe)
>  {
>         if (err_cqe->syndrome != MLX5_CQE_SYNDROME_WR_FLUSH_ERR) {
> @@ -1508,8 +1508,8 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
>   * The L3 pseudo-header checksum.
>   */
>  static inline void
> -mlx5_lro_update_tcp_hdr(struct rte_tcp_hdr *restrict tcp,
> -                        volatile struct mlx5_cqe *restrict cqe,
> +mlx5_lro_update_tcp_hdr(struct rte_tcp_hdr *__rte_restrict tcp,
> +                        volatile struct mlx5_cqe *__rte_restrict cqe,
>                          uint32_t phcsum)
>  {
>         uint8_t l4_type = (rte_be_to_cpu_16(cqe->hdr_type_etc) &
> @@ -1550,8 +1550,8 @@ mlx5_lro_update_tcp_hdr(struct rte_tcp_hdr *restrict tcp,
>   * The packet length.
>   */
>  static inline void
> -mlx5_lro_update_hdr(uint8_t *restrict padd,
> -                    volatile struct mlx5_cqe *restrict cqe,
> +mlx5_lro_update_hdr(uint8_t *__rte_restrict padd,
> +                    volatile struct mlx5_cqe *__rte_restrict cqe,
>                      uint32_t len)
>  {
>         union {
> @@ -1965,7 +1965,7 @@ mlx5_check_vec_rx_support(struct rte_eth_dev *dev __rte_unused)
>   * compile time and may be used for optimization.
>   */
>  static __rte_always_inline void
> -mlx5_tx_free_mbuf(struct rte_mbuf **restrict pkts,
> +mlx5_tx_free_mbuf(struct rte_mbuf **__rte_restrict pkts,
>                   unsigned int pkts_n,
>                   unsigned int olx __rte_unused)
>  {
> @@ -2070,7 +2070,7 @@ mlx5_tx_free_mbuf(struct rte_mbuf **restrict pkts,
>   * compile time and may be used for optimization.
>   */
>  static __rte_always_inline void
> -mlx5_tx_free_elts(struct mlx5_txq_data *restrict txq,
> +mlx5_tx_free_elts(struct mlx5_txq_data *__rte_restrict txq,
>                   uint16_t tail,
>                   unsigned int olx __rte_unused)
>  {
> @@ -2111,8 +2111,8 @@ mlx5_tx_free_elts(struct mlx5_txq_data *restrict txq,
>   * compile time and may be used for optimization.
>   */
>  static __rte_always_inline void
> -mlx5_tx_copy_elts(struct mlx5_txq_data *restrict txq,
> -                  struct rte_mbuf **restrict pkts,
> +mlx5_tx_copy_elts(struct mlx5_txq_data *__rte_restrict txq,
> +                  struct rte_mbuf **__rte_restrict pkts,
>                   unsigned int pkts_n,
>                   unsigned int olx __rte_unused)
>  {
> @@ -2148,7 +2148,7 @@ mlx5_tx_copy_elts(struct mlx5_txq_data *restrict txq,
>   * compile time and may be used for optimization.
>   */
>  static __rte_always_inline void
> -mlx5_tx_comp_flush(struct mlx5_txq_data *restrict txq,
> +mlx5_tx_comp_flush(struct mlx5_txq_data *__rte_restrict txq,
>                    volatile struct mlx5_cqe *last_cqe,
>                    unsigned int olx __rte_unused)
>  {
> @@ -2179,7 +2179,7 @@ mlx5_tx_comp_flush(struct mlx5_txq_data *restrict txq,
>   * routine smaller, simple and faster - from experiments.
>   */
>  static void
> -mlx5_tx_handle_completion(struct mlx5_txq_data *restrict txq,
> +mlx5_tx_handle_completion(struct mlx5_txq_data *__rte_restrict txq,
>                           unsigned int olx __rte_unused)
>  {
>         unsigned int count = MLX5_TX_COMP_MAX_CQE;
> @@ -2268,8 +2268,8 @@ mlx5_tx_handle_completion(struct mlx5_txq_data *restrict txq,
>   * compile time and may be used for optimization.
>   */
>  static __rte_always_inline void
> -mlx5_tx_request_completion(struct mlx5_txq_data *restrict txq,
> -                           struct mlx5_txq_local *restrict loc,
> +mlx5_tx_request_completion(struct mlx5_txq_data *__rte_restrict txq,
> +                           struct mlx5_txq_local *__rte_restrict loc,
>                            unsigned int olx)
>  {
>         uint16_t head = txq->elts_head;
> @@ -2316,7 +2316,7 @@ mlx5_tx_request_completion(struct mlx5_txq_data *restrict txq,
>  int
>  mlx5_tx_descriptor_status(void *tx_queue, uint16_t offset)
>  {
> -       struct mlx5_txq_data *restrict txq = tx_queue;
> +       struct mlx5_txq_data *__rte_restrict txq = tx_queue;
>         uint16_t used;
>
>         mlx5_tx_handle_completion(txq, 0);
> @@ -2347,14 +2347,14 @@ mlx5_tx_descriptor_status(void *tx_queue, uint16_t offset)
>   * compile time and may be used for optimization.
>   */
>  static __rte_always_inline void
> -mlx5_tx_cseg_init(struct mlx5_txq_data *restrict txq,
> -                  struct mlx5_txq_local *restrict loc __rte_unused,
> -                  struct mlx5_wqe *restrict wqe,
> +mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
> +                  struct mlx5_txq_local *__rte_restrict loc __rte_unused,
> +                  struct mlx5_wqe *__rte_restrict wqe,
>                   unsigned int ds,
>                   unsigned int opcode,
>                   unsigned int olx __rte_unused)
>  {
> -       struct mlx5_wqe_cseg *restrict cs = &wqe->cseg;
> +       struct mlx5_wqe_cseg *__rte_restrict cs = &wqe->cseg;
>
>         /* For legacy MPW replace the EMPW by TSO with modifier. */
>         if (MLX5_TXOFF_CONFIG(MPW) && opcode == MLX5_OPCODE_ENHANCED_MPSW)
> @@ -2382,12 +2382,12 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *restrict txq,
>   * compile time and may be used for optimization.
>   */
>  static __rte_always_inline void
> -mlx5_tx_eseg_none(struct mlx5_txq_data *restrict txq __rte_unused,
> -                  struct mlx5_txq_local *restrict loc,
> -                  struct mlx5_wqe *restrict wqe,
> +mlx5_tx_eseg_none(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
> +                  struct mlx5_txq_local *__rte_restrict loc,
> +                  struct mlx5_wqe *__rte_restrict wqe,
>                   unsigned int olx)
>  {
> -       struct mlx5_wqe_eseg *restrict es = &wqe->eseg;
> +       struct mlx5_wqe_eseg *__rte_restrict es = &wqe->eseg;
>         uint32_t csum;
>
>         /*
> @@ -2440,13 +2440,13 @@ mlx5_tx_eseg_none(struct mlx5_txq_data *restrict txq __rte_unused,
>   * compile time and may be used for optimization.
>   */
>  static __rte_always_inline void
> -mlx5_tx_eseg_dmin(struct mlx5_txq_data *restrict txq __rte_unused,
> -                  struct mlx5_txq_local *restrict loc,
> -                  struct mlx5_wqe *restrict wqe,
> +mlx5_tx_eseg_dmin(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
> +                  struct mlx5_txq_local *__rte_restrict loc,
> +                  struct mlx5_wqe *__rte_restrict wqe,
>                   unsigned int vlan,
>                   unsigned int olx)
>  {
> -       struct mlx5_wqe_eseg *restrict es = &wqe->eseg;
> +       struct mlx5_wqe_eseg *__rte_restrict es = &wqe->eseg;
>         uint32_t csum;
>         uint8_t *psrc, *pdst;
>
> @@ -2524,15 +2524,15 @@ mlx5_tx_eseg_dmin(struct mlx5_txq_data *restrict txq __rte_unused,
>   * Pointer to the next Data Segment (aligned and wrapped around).
>   */
>  static __rte_always_inline struct mlx5_wqe_dseg *
> -mlx5_tx_eseg_data(struct mlx5_txq_data *restrict txq,
> -                  struct mlx5_txq_local *restrict loc,
> -                  struct mlx5_wqe *restrict wqe,
> +mlx5_tx_eseg_data(struct mlx5_txq_data *__rte_restrict txq,
> +                  struct mlx5_txq_local *__rte_restrict loc,
> +                  struct mlx5_wqe *__rte_restrict wqe,
>                   unsigned int vlan,
>                   unsigned int inlen,
>                   unsigned int tso,
>                   unsigned int olx)
>  {
> -       struct mlx5_wqe_eseg *restrict es = &wqe->eseg;
> +       struct mlx5_wqe_eseg *__rte_restrict es = &wqe->eseg;
>         uint32_t csum;
>         uint8_t *psrc, *pdst;
>         unsigned int part;
> @@ -2650,7 +2650,7 @@ mlx5_tx_eseg_data(struct mlx5_txq_data *restrict txq,
>   */
>  static __rte_always_inline unsigned int
>  mlx5_tx_mseg_memcpy(uint8_t *pdst,
> -                    struct mlx5_txq_local *restrict loc,
> +                    struct mlx5_txq_local *__rte_restrict loc,
>                      unsigned int len,
>                      unsigned int must,
>                      unsigned int olx __rte_unused)
> @@ -2747,15 +2747,15 @@ mlx5_tx_mseg_memcpy(uint8_t *pdst,
>   * wrapping check on its own).
>   */
>  static __rte_always_inline struct mlx5_wqe_dseg *
> -mlx5_tx_eseg_mdat(struct mlx5_txq_data *restrict txq,
> -                  struct mlx5_txq_local *restrict loc,
> -                  struct mlx5_wqe *restrict wqe,
> +mlx5_tx_eseg_mdat(struct mlx5_txq_data *__rte_restrict txq,
> +                  struct mlx5_txq_local *__rte_restrict loc,
> +                  struct mlx5_wqe *__rte_restrict wqe,
>                   unsigned int vlan,
>                   unsigned int inlen,
>                   unsigned int tso,
>                   unsigned int olx)
>  {
> -       struct mlx5_wqe_eseg *restrict es = &wqe->eseg;
> +       struct mlx5_wqe_eseg *__rte_restrict es = &wqe->eseg;
>         uint32_t csum;
>         uint8_t *pdst;
>         unsigned int part, tlen = 0;
> @@ -2851,9 +2851,9 @@ mlx5_tx_eseg_mdat(struct mlx5_txq_data *restrict txq,
>   * compile time and may be used for optimization.
>   */
>  static __rte_always_inline void
> -mlx5_tx_dseg_ptr(struct mlx5_txq_data *restrict txq,
> -                 struct mlx5_txq_local *restrict loc,
> -                 struct mlx5_wqe_dseg *restrict dseg,
> +mlx5_tx_dseg_ptr(struct mlx5_txq_data *__rte_restrict txq,
> +                 struct mlx5_txq_local *__rte_restrict loc,
> +                 struct mlx5_wqe_dseg *__rte_restrict dseg,
>                  uint8_t *buf,
>                  unsigned int len,
>                  unsigned int olx __rte_unused)
> @@ -2885,9 +2885,9 @@ mlx5_tx_dseg_ptr(struct mlx5_txq_data *restrict txq,
>   * compile time and may be used for optimization.
>   */
>  static __rte_always_inline void
> -mlx5_tx_dseg_iptr(struct mlx5_txq_data *restrict txq,
> -                  struct mlx5_txq_local *restrict loc,
> -                  struct mlx5_wqe_dseg *restrict dseg,
> +mlx5_tx_dseg_iptr(struct mlx5_txq_data *__rte_restrict txq,
> +                  struct mlx5_txq_local *__rte_restrict loc,
> +                  struct mlx5_wqe_dseg *__rte_restrict dseg,
>                   uint8_t *buf,
>                   unsigned int len,
>                   unsigned int olx __rte_unused)
> @@ -2961,9 +2961,9 @@ mlx5_tx_dseg_iptr(struct mlx5_txq_data *restrict txq,
>   * last packet in the eMPW session.
>   */
>  static __rte_always_inline struct mlx5_wqe_dseg *
> -mlx5_tx_dseg_empw(struct mlx5_txq_data *restrict txq,
> -                  struct mlx5_txq_local *restrict loc __rte_unused,
> -                  struct mlx5_wqe_dseg *restrict dseg,
> +mlx5_tx_dseg_empw(struct mlx5_txq_data *__rte_restrict txq,
> +                  struct mlx5_txq_local *__rte_restrict loc __rte_unused,
> +                  struct mlx5_wqe_dseg *__rte_restrict dseg,
>                   uint8_t *buf,
>                   unsigned int len,
>                   unsigned int olx __rte_unused)
> @@ -3024,9 +3024,9 @@ mlx5_tx_dseg_empw(struct mlx5_txq_data *restrict txq,
>   * Ring buffer wraparound check is needed.
>   */
>  static __rte_always_inline struct mlx5_wqe_dseg *
> -mlx5_tx_dseg_vlan(struct mlx5_txq_data *restrict txq,
> -                  struct mlx5_txq_local *restrict loc __rte_unused,
> -                  struct mlx5_wqe_dseg *restrict dseg,
> +mlx5_tx_dseg_vlan(struct mlx5_txq_data *__rte_restrict txq,
> +                  struct mlx5_txq_local *__rte_restrict loc __rte_unused,
> +                  struct mlx5_wqe_dseg *__rte_restrict dseg,
>                   uint8_t *buf,
>                   unsigned int len,
>                   unsigned int olx __rte_unused)
> @@ -3112,15 +3112,15 @@ mlx5_tx_dseg_vlan(struct mlx5_txq_data *restrict txq,
>   * Actual size of built WQE in segments.
>   */
>  static __rte_always_inline unsigned int
> -mlx5_tx_mseg_build(struct mlx5_txq_data *restrict txq,
> -                   struct mlx5_txq_local *restrict loc,
> -                   struct mlx5_wqe *restrict wqe,
> +mlx5_tx_mseg_build(struct mlx5_txq_data *__rte_restrict txq,
> +                   struct mlx5_txq_local *__rte_restrict loc,
> +                   struct mlx5_wqe *__rte_restrict wqe,
>                    unsigned int vlan,
>                    unsigned int inlen,
>                    unsigned int tso,
>                    unsigned int olx __rte_unused)
>  {
> -       struct mlx5_wqe_dseg *restrict dseg;
> +       struct mlx5_wqe_dseg *__rte_restrict dseg;
>         unsigned int ds;
>
>         MLX5_ASSERT((rte_pktmbuf_pkt_len(loc->mbuf) + vlan) >= inlen);
> @@ -3225,11 +3225,11 @@ mlx5_tx_mseg_build(struct mlx5_txq_data *restrict txq,
>   * Local context variables partially updated.
>   */
>  static __rte_always_inline enum mlx5_txcmp_code
> -mlx5_tx_packet_multi_tso(struct mlx5_txq_data *restrict txq,
> -                         struct mlx5_txq_local *restrict loc,
> +mlx5_tx_packet_multi_tso(struct mlx5_txq_data *__rte_restrict txq,
> +                         struct mlx5_txq_local *__rte_restrict loc,
>                          unsigned int olx)
>  {
> -       struct mlx5_wqe *restrict wqe;
> +       struct mlx5_wqe *__rte_restrict wqe;
>         unsigned int ds, dlen, inlen, ntcp, vlan = 0;
>
>         /*
> @@ -3314,12 +3314,12 @@ mlx5_tx_packet_multi_tso(struct mlx5_txq_data *restrict txq,
>   * Local context variables partially updated.
>   */
>  static __rte_always_inline enum mlx5_txcmp_code
> -mlx5_tx_packet_multi_send(struct mlx5_txq_data *restrict txq,
> -                          struct mlx5_txq_local *restrict loc,
> +mlx5_tx_packet_multi_send(struct mlx5_txq_data *__rte_restrict txq,
> +                          struct mlx5_txq_local *__rte_restrict loc,
>                           unsigned int olx)
>  {
> -       struct mlx5_wqe_dseg *restrict dseg;
> -       struct mlx5_wqe *restrict wqe;
> +       struct mlx5_wqe_dseg *__rte_restrict dseg;
> +       struct mlx5_wqe *__rte_restrict wqe;
>         unsigned int ds, nseg;
>
>         MLX5_ASSERT(NB_SEGS(loc->mbuf) > 1);
> @@ -3422,11 +3422,11 @@ mlx5_tx_packet_multi_send(struct mlx5_txq_data *restrict txq,
>   * Local context variables partially updated.
>   */
>  static __rte_always_inline enum mlx5_txcmp_code
> -mlx5_tx_packet_multi_inline(struct mlx5_txq_data *restrict txq,
> -                            struct mlx5_txq_local *restrict loc,
> +mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
> +                            struct mlx5_txq_local *__rte_restrict loc,
>                             unsigned int olx)
>  {
> -       struct mlx5_wqe *restrict wqe;
> +       struct mlx5_wqe *__rte_restrict wqe;
>         unsigned int ds, inlen, dlen, vlan = 0;
>
>         MLX5_ASSERT(MLX5_TXOFF_CONFIG(INLINE));
> @@ -3587,10 +3587,10 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *restrict txq,
>   * Local context variables updated.
>   */
>  static __rte_always_inline enum mlx5_txcmp_code
> -mlx5_tx_burst_mseg(struct mlx5_txq_data *restrict txq,
> -                   struct rte_mbuf **restrict pkts,
> +mlx5_tx_burst_mseg(struct mlx5_txq_data *__rte_restrict txq,
> +                   struct rte_mbuf **__rte_restrict pkts,
>                    unsigned int pkts_n,
> -                   struct mlx5_txq_local *restrict loc,
> +                   struct mlx5_txq_local *__rte_restrict loc,
>                    unsigned int olx)
>  {
>         MLX5_ASSERT(loc->elts_free && loc->wqe_free);
> @@ -3676,10 +3676,10 @@ mlx5_tx_burst_mseg(struct mlx5_txq_data *restrict txq,
>   * Local context variables updated.
>   */
>  static __rte_always_inline enum mlx5_txcmp_code
> -mlx5_tx_burst_tso(struct mlx5_txq_data *restrict txq,
> -                  struct rte_mbuf **restrict pkts,
> +mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
> +                  struct rte_mbuf **__rte_restrict pkts,
>                   unsigned int pkts_n,
> -                  struct mlx5_txq_local *restrict loc,
> +                  struct mlx5_txq_local *__rte_restrict loc,
>                   unsigned int olx)
>  {
>         MLX5_ASSERT(loc->elts_free && loc->wqe_free);
> @@ -3687,8 +3687,8 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *restrict txq,
>         pkts += loc->pkts_sent + 1;
>         pkts_n -= loc->pkts_sent;
>         for (;;) {
> -               struct mlx5_wqe_dseg *restrict dseg;
> -               struct mlx5_wqe *restrict wqe;
> +               struct mlx5_wqe_dseg *__rte_restrict dseg;
> +               struct mlx5_wqe *__rte_restrict wqe;
>                 unsigned int ds, dlen, hlen, ntcp, vlan = 0;
>                 uint8_t *dptr;
>
> @@ -3800,8 +3800,8 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *restrict txq,
>   * MLX5_TXCMP_CODE_EMPW - single-segment packet, use MPW.
>   */
>  static __rte_always_inline enum mlx5_txcmp_code
> -mlx5_tx_able_to_empw(struct mlx5_txq_data *restrict txq,
> -                     struct mlx5_txq_local *restrict loc,
> +mlx5_tx_able_to_empw(struct mlx5_txq_data *__rte_restrict txq,
> +                     struct mlx5_txq_local *__rte_restrict loc,
>                      unsigned int olx,
>                      bool newp)
>  {
> @@ -3855,9 +3855,9 @@ mlx5_tx_able_to_empw(struct mlx5_txq_data *restrict txq,
>   * false - no match, eMPW should be restarted.
>   */
>  static __rte_always_inline bool
> -mlx5_tx_match_empw(struct mlx5_txq_data *restrict txq __rte_unused,
> -                   struct mlx5_wqe_eseg *restrict es,
> -                   struct mlx5_txq_local *restrict loc,
> +mlx5_tx_match_empw(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
> +                   struct mlx5_wqe_eseg *__rte_restrict es,
> +                   struct mlx5_txq_local *__rte_restrict loc,
>                    uint32_t dlen,
>                    unsigned int olx)
>  {
> @@ -3909,8 +3909,8 @@ mlx5_tx_match_empw(struct mlx5_txq_data *restrict txq __rte_unused,
>   * false - no match, eMPW should be restarted.
>   */
>  static __rte_always_inline void
> -mlx5_tx_sdone_empw(struct mlx5_txq_data *restrict txq,
> -                   struct mlx5_txq_local *restrict loc,
> +mlx5_tx_sdone_empw(struct mlx5_txq_data *__rte_restrict txq,
> +                   struct mlx5_txq_local *__rte_restrict loc,
>                    unsigned int ds,
>                    unsigned int slen,
>                    unsigned int olx __rte_unused)
> @@ -3954,11 +3954,11 @@ mlx5_tx_sdone_empw(struct mlx5_txq_data *restrict txq,
>   * false - no match, eMPW should be restarted.
>   */
>  static __rte_always_inline void
> -mlx5_tx_idone_empw(struct mlx5_txq_data *restrict txq,
> -                   struct mlx5_txq_local *restrict loc,
> +mlx5_tx_idone_empw(struct mlx5_txq_data *__rte_restrict txq,
> +                   struct mlx5_txq_local *__rte_restrict loc,
>                    unsigned int len,
>                    unsigned int slen,
> -                   struct mlx5_wqe *restrict wqem,
> +                   struct mlx5_wqe *__rte_restrict wqem,
>                    unsigned int olx __rte_unused)
>  {
>         struct mlx5_wqe_dseg *dseg = &wqem->dseg[0];
> @@ -4042,10 +4042,10 @@ mlx5_tx_idone_empw(struct mlx5_txq_data *restrict txq,
>   * No VLAN insertion is supported.
>   */
>  static __rte_always_inline enum mlx5_txcmp_code
> -mlx5_tx_burst_empw_simple(struct mlx5_txq_data *restrict txq,
> -                          struct rte_mbuf **restrict pkts,
> +mlx5_tx_burst_empw_simple(struct mlx5_txq_data *__rte_restrict txq,
> +                          struct rte_mbuf **__rte_restrict pkts,
>                           unsigned int pkts_n,
> -                          struct mlx5_txq_local *restrict loc,
> +                          struct mlx5_txq_local *__rte_restrict loc,
>                           unsigned int olx)
>  {
>         /*
> @@ -4061,8 +4061,8 @@ mlx5_tx_burst_empw_simple(struct mlx5_txq_data *restrict txq,
>         pkts += loc->pkts_sent + 1;
>         pkts_n -= loc->pkts_sent;
>         for (;;) {
> -               struct mlx5_wqe_dseg *restrict dseg;
> -               struct mlx5_wqe_eseg *restrict eseg;
> +               struct mlx5_wqe_dseg *__rte_restrict dseg;
> +               struct mlx5_wqe_eseg *__rte_restrict eseg;
>                 enum mlx5_txcmp_code ret;
>                 unsigned int part, loop;
>                 unsigned int slen = 0;
> @@ -4208,10 +4208,10 @@ mlx5_tx_burst_empw_simple(struct mlx5_txq_data *restrict txq,
>   * with inlining, optionally supports VLAN insertion.
>   */
>  static __rte_always_inline enum mlx5_txcmp_code
> -mlx5_tx_burst_empw_inline(struct mlx5_txq_data *restrict txq,
> -                          struct rte_mbuf **restrict pkts,
> +mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
> +                          struct rte_mbuf **__rte_restrict pkts,
>                           unsigned int pkts_n,
> -                          struct mlx5_txq_local *restrict loc,
> +                          struct mlx5_txq_local *__rte_restrict loc,
>                           unsigned int olx)
>  {
>         /*
> @@ -4227,8 +4227,8 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *restrict txq,
>         pkts += loc->pkts_sent + 1;
>         pkts_n -= loc->pkts_sent;
>         for (;;) {
> -               struct mlx5_wqe_dseg *restrict dseg;
> -               struct mlx5_wqe *restrict wqem;
> +               struct mlx5_wqe_dseg *__rte_restrict dseg;
> +               struct mlx5_wqe *__rte_restrict wqem;
>                 enum mlx5_txcmp_code ret;
>                 unsigned int room, part, nlim;
>                 unsigned int slen = 0;
> @@ -4489,10 +4489,10 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *restrict txq,
>   * Data inlining and VLAN insertion are supported.
>   */
>  static __rte_always_inline enum mlx5_txcmp_code
> -mlx5_tx_burst_single_send(struct mlx5_txq_data *restrict txq,
> -                          struct rte_mbuf **restrict pkts,
> +mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
> +                          struct rte_mbuf **__rte_restrict pkts,
>                           unsigned int pkts_n,
> -                          struct mlx5_txq_local *restrict loc,
> +                          struct mlx5_txq_local *__rte_restrict loc,
>                           unsigned int olx)
>  {
>         /*
> @@ -4504,7 +4504,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *restrict txq,
>         pkts += loc->pkts_sent + 1;
>         pkts_n -= loc->pkts_sent;
>         for (;;) {
> -               struct mlx5_wqe *restrict wqe;
> +               struct mlx5_wqe *__rte_restrict wqe;
>                 enum mlx5_txcmp_code ret;
>
>                 MLX5_ASSERT(NB_SEGS(loc->mbuf) == 1);
> @@ -4602,7 +4602,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *restrict txq,
>                          * not contain inlined data for eMPW due to
>                          * segment shared for all packets.
>                          */
> -                       struct mlx5_wqe_dseg *restrict dseg;
> +                       struct mlx5_wqe_dseg *__rte_restrict dseg;
>                         unsigned int ds;
>                         uint8_t *dptr;
>
> @@ -4765,10 +4765,10 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *restrict txq,
>  }
>
>  static __rte_always_inline enum mlx5_txcmp_code
> -mlx5_tx_burst_single(struct mlx5_txq_data *restrict txq,
> -                     struct rte_mbuf **restrict pkts,
> +mlx5_tx_burst_single(struct mlx5_txq_data *__rte_restrict txq,
> +                     struct rte_mbuf **__rte_restrict pkts,
>                      unsigned int pkts_n,
> -                     struct mlx5_txq_local *restrict loc,
> +                     struct mlx5_txq_local *__rte_restrict loc,
>                      unsigned int olx)
>  {
>         enum mlx5_txcmp_code ret;
> @@ -4819,8 +4819,8 @@ mlx5_tx_burst_single(struct mlx5_txq_data *restrict txq,
>   * Number of packets successfully transmitted (<= pkts_n).
>   */
>  static __rte_always_inline uint16_t
> -mlx5_tx_burst_tmpl(struct mlx5_txq_data *restrict txq,
> -                   struct rte_mbuf **restrict pkts,
> +mlx5_tx_burst_tmpl(struct mlx5_txq_data *__rte_restrict txq,
> +                   struct rte_mbuf **__rte_restrict pkts,
>                    uint16_t pkts_n,
>                    unsigned int olx)
>  {
> --
> 2.27.0
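For readers outside this thread: the patch's rationale is that a wrapper macro can map to whichever spelling of `restrict` the active compiler accepts. Below is a minimal sketch of that idea, using a hypothetical `my_restrict` macro rather than DPDK's actual `__rte_restrict` definition, so the detection logic here is illustrative only:

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical portable wrapper (a sketch of the idea, not DPDK's exact
 * definition): C++ and MSVC do not accept the C99 'restrict' keyword but
 * provide '__restrict' as an extension, so the macro picks whichever
 * spelling the active compiler understands.
 */
#if defined(__cplusplus) || defined(_MSC_VER)
#define my_restrict __restrict
#else
#define my_restrict restrict
#endif

/*
 * With both pointers qualified, the compiler may assume 'dst' and 'src'
 * never alias within the copied region, which permits vectorizing the
 * loop without runtime overlap checks.
 */
static void
copy_u32(uint32_t *my_restrict dst, const uint32_t *my_restrict src,
	 size_t n)
{
	for (size_t i = 0; i < n; i++)
		dst[i] = src[i];
}
```

The observable behavior is identical with or without the qualifier; `restrict` is purely a promise from the caller to the optimizer, which is why a mechanical keyword swap like this patch is safe.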