From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yongseok Koh <yskoh@mellanox.com>
To: yliu@fridaylinux.org
Cc: stable@dpdk.org, shahafs@mellanox.com, adrien.mazarguil@6wind.com,
 nelio.laranjeiro@6wind.com
Date: Mon, 4 Jun 2018 17:10:49 -0700
Message-Id: <20180605001129.13184-28-yskoh@mellanox.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180605001129.13184-1-yskoh@mellanox.com>
References: <20180605001129.13184-1-yskoh@mellanox.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Subject: [dpdk-stable] [PATCH 27/67] net/mlx5: standardize on negative errno values
List-Id: patches for DPDK stable branches

From: Nélio Laranjeiro <nelio.laranjeiro@6wind.com>

[ backported from upstream commit a6d83b6a9209a198fa5a7d2f9cbb37190e256f9c ]

Set rte_errno systematically as well.
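
To make the convention concrete, here is a minimal sketch (not part of the
patch; mlx5_example_op() and its caller are made-up names): on failure a
function stores the positive errno code in rte_errno and returns -rte_errno,
so the return value alone signals failure while rte_errno preserves the cause:

	#include <errno.h>
	#include <stdio.h>
	#include <string.h>
	#include <rte_errno.h>

	/* Callee side: set rte_errno first, then return its negative. */
	static int
	mlx5_example_op(void *obj)
	{
		if (obj == NULL) {
			rte_errno = EINVAL;
			return -rte_errno; /* always negative on failure */
		}
		return 0;
	}

	/* Caller side: a non-zero return means failure, rte_errno holds why. */
	static void
	mlx5_example_caller(void *obj)
	{
		if (mlx5_example_op(obj))
			fprintf(stderr, "op failed: %s\n", strerror(rte_errno));
	}

Compared with returning a bare -1 or a positive errno value, this keeps every
entry point consistent and lets errors propagate unchanged through call
chains, which is what the hunks below apply systematically.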
Signed-off-by: Nelio Laranjeiro Acked-by: Adrien Mazarguil --- drivers/net/mlx5/mlx5.c | 88 ++++++----- drivers/net/mlx5/mlx5_ethdev.c | 225 ++++++++++++++++------------ drivers/net/mlx5/mlx5_flow.c | 317 +++++++++++++++++++++++----------------- drivers/net/mlx5/mlx5_mac.c | 33 +++-- drivers/net/mlx5/mlx5_mr.c | 15 +- drivers/net/mlx5/mlx5_rss.c | 50 ++++--- drivers/net/mlx5/mlx5_rxmode.c | 28 +++- drivers/net/mlx5/mlx5_rxq.c | 138 ++++++++++------- drivers/net/mlx5/mlx5_socket.c | 82 +++++++---- drivers/net/mlx5/mlx5_stats.c | 53 +++++-- drivers/net/mlx5/mlx5_trigger.c | 89 ++++++----- drivers/net/mlx5/mlx5_txq.c | 50 ++++--- drivers/net/mlx5/mlx5_vlan.c | 24 +-- 13 files changed, 711 insertions(+), 481 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index ebb778826..9319effcb 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -150,7 +150,7 @@ mlx5_getenv_int(const char *name) * A pointer to the callback data. * * @return - * a pointer to the allocate space. + * Allocated buffer, NULL otherwise and rte_errno is set. */ static void * mlx5_alloc_verbs_buf(size_t size, void *data) @@ -172,6 +172,8 @@ mlx5_alloc_verbs_buf(size_t size, void *data) } assert(data != NULL); ret = rte_malloc_socket(__func__, size, alignment, socket); + if (!ret && size) + rte_errno = ENOMEM; DEBUG("Extern alloc size: %lu, align: %lu: %p", size, alignment, ret); return ret; } @@ -405,7 +407,7 @@ mlx5_dev_idx(struct rte_pci_addr *pci_addr) * User data. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_args_check(const char *key, const char *val, void *opaque) @@ -416,8 +418,9 @@ mlx5_args_check(const char *key, const char *val, void *opaque) errno = 0; tmp = strtoul(val, NULL, 0); if (errno) { + rte_errno = errno; WARN("%s: \"%s\" is not a valid integer", key, val); - return errno; + return -rte_errno; } if (strcmp(MLX5_RXQ_CQE_COMP_EN, key) == 0) { args->cqe_comp = !!tmp; @@ -439,7 +442,8 @@ mlx5_args_check(const char *key, const char *val, void *opaque) args->rx_vec_en = !!tmp; } else { WARN("%s: unknown parameter", key); - return -EINVAL; + rte_errno = EINVAL; + return -rte_errno; } return 0; } @@ -453,7 +457,7 @@ mlx5_args_check(const char *key, const char *val, void *opaque) * Device arguments structure. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_args(struct mlx5_args *args, struct rte_devargs *devargs) @@ -485,9 +489,10 @@ mlx5_args(struct mlx5_args *args, struct rte_devargs *devargs) if (rte_kvargs_count(kvlist, params[i])) { ret = rte_kvargs_process(kvlist, params[i], mlx5_args_check, args); - if (ret != 0) { + if (ret) { + rte_errno = EINVAL; rte_kvargs_free(kvlist); - return ret; + return -rte_errno; } } } @@ -513,7 +518,7 @@ static void *uar_base; * Pointer to Ethernet device. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_uar_init_primary(struct rte_eth_dev *dev) @@ -522,7 +527,6 @@ mlx5_uar_init_primary(struct rte_eth_dev *dev) void *addr = (void *)0; int i; const struct rte_mem_config *mcfg; - int ret; if (uar_base) { /* UAR address space mapped. 
*/ priv->uar_base = uar_base; @@ -544,8 +548,8 @@ mlx5_uar_init_primary(struct rte_eth_dev *dev) if (addr == MAP_FAILED) { ERROR("Failed to reserve UAR address space, please adjust " "MLX5_UAR_SIZE or try --base-virtaddr"); - ret = ENOMEM; - return ret; + rte_errno = ENOMEM; + return -rte_errno; } /* Accept either same addr or a new addr returned from mmap if target * range occupied. @@ -564,14 +568,13 @@ mlx5_uar_init_primary(struct rte_eth_dev *dev) * Pointer to Ethernet device. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_uar_init_secondary(struct rte_eth_dev *dev) { struct priv *priv = dev->data->dev_private; void *addr; - int ret; assert(priv->uar_base); if (uar_base) { /* already reserved. */ @@ -584,15 +587,15 @@ mlx5_uar_init_secondary(struct rte_eth_dev *dev) if (addr == MAP_FAILED) { ERROR("UAR mmap failed: %p size: %llu", priv->uar_base, MLX5_UAR_SIZE); - ret = ENXIO; - return ret; + rte_errno = ENXIO; + return -rte_errno; } if (priv->uar_base != addr) { ERROR("UAR address %p size %llu occupied, please adjust " "MLX5_UAR_OFFSET or try EAL parameter --base-virtaddr", priv->uar_base, MLX5_UAR_SIZE); - ret = ENXIO; - return ret; + rte_errno = ENXIO; + return -rte_errno; } uar_base = addr; /* process local, don't reserve again */ INFO("Reserved UAR address space: %p", addr); @@ -643,13 +646,13 @@ mlx5_args_assign(struct priv *priv, struct mlx5_args *args) * PCI device information. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev) { - struct ibv_device **list; + struct ibv_device **list = NULL; struct ibv_device *ibv_dev; int err = 0; struct ibv_context *attr_ctx = NULL; @@ -669,7 +672,8 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, idx = mlx5_dev_idx(&pci_dev->addr); if (idx == -1) { ERROR("this driver cannot support any more adapters"); - return -ENOMEM; + err = ENOMEM; + goto error; } DEBUG("using driver device index %d", idx); /* Save PCI address. 
*/ @@ -677,9 +681,10 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, list = ibv_get_device_list(&i); if (list == NULL) { assert(errno); + err = errno; if (errno == ENOSYS) ERROR("cannot list devices, is ib_uverbs loaded?"); - return -errno; + goto error; } assert(i >= 0); /* @@ -715,7 +720,8 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, INFO("PCI information matches, using device \"%s\"", list[i]->name); attr_ctx = ibv_open_device(list[i]); - err = errno; + rte_errno = errno; + err = rte_errno; break; } if (attr_ctx == NULL) { @@ -723,13 +729,12 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, switch (err) { case 0: ERROR("cannot access device, is mlx5_ib loaded?"); - return -ENODEV; + err = ENODEV; + goto error; case EINVAL: ERROR("cannot use device, are drivers up to date?"); - return -EINVAL; + goto error; } - assert(err > 0); - return -err; } ibv_dev = list[i]; DEBUG("device opened"); @@ -755,8 +760,10 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, cqe_comp = 0; else cqe_comp = 1; - if (ibv_query_device_ex(attr_ctx, NULL, &device_attr)) + if (ibv_query_device_ex(attr_ctx, NULL, &device_attr)) { + err = errno; goto error; + } INFO("%u port(s) detected", device_attr.orig_attr.phys_port_cnt); for (i = 0; i < device_attr.orig_attr.phys_port_cnt; i++) { char name[RTE_ETH_NAME_MAX_LEN]; @@ -790,22 +797,19 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, eth_dev = rte_eth_dev_attach_secondary(name); if (eth_dev == NULL) { ERROR("can not attach rte ethdev"); - err = ENOMEM; + rte_errno = ENOMEM; + err = rte_errno; goto error; } eth_dev->device = &pci_dev->device; eth_dev->dev_ops = &mlx5_dev_sec_ops; err = mlx5_uar_init_secondary(eth_dev); - if (err < 0) { - err = -err; + if (err) goto error; - } /* Receive command fd from primary process */ err = mlx5_socket_connect(eth_dev); - if (err < 0) { - err = -err; + if (err) goto error; - } /* Remap UAR for Tx queues. */ err = mlx5_tx_uar_remap(eth_dev, err); if (err) @@ -876,6 +880,7 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, mlx5_args_assign(priv, &args); if (ibv_query_device_ex(ctx, NULL, &device_attr_ex)) { ERROR("ibv_query_device_ex() failed"); + err = errno; goto port_error; } priv->hw_csum = @@ -996,7 +1001,9 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, } #endif /* Get actual MTU if possible. */ - mlx5_get_mtu(eth_dev, &priv->mtu); + err = mlx5_get_mtu(eth_dev, &priv->mtu); + if (err) + goto port_error; DEBUG("port %u MTU is %u", priv->port, priv->mtu); /* * Initialize burst functions to prevent crashes before link-up. @@ -1037,16 +1044,19 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, */ /* no port found, complain */ if (!mlx5_dev[idx].ports) { - err = ENODEV; - goto error; + rte_errno = ENODEV; + err = rte_errno; } error: if (attr_ctx) claim_zero(ibv_close_device(attr_ctx)); if (list) ibv_free_device_list(list); - assert(err >= 0); - return -err; + if (err) { + rte_errno = err; + return -rte_errno; + } + return 0; } static const struct rte_pci_id mlx5_pci_id_map[] = { diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c index 3435bf338..d0be35570 100644 --- a/drivers/net/mlx5/mlx5_ethdev.c +++ b/drivers/net/mlx5/mlx5_ethdev.c @@ -128,7 +128,7 @@ struct ethtool_link_settings { * Interface name output buffer. * * @return - * 0 on success, -1 on failure and errno is set. + * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ int mlx5_get_ifname(const struct rte_eth_dev *dev, char (*ifname)[IF_NAMESIZE]) @@ -144,8 +144,10 @@ mlx5_get_ifname(const struct rte_eth_dev *dev, char (*ifname)[IF_NAMESIZE]) MKSTR(path, "%s/device/net", priv->ibdev_path); dir = opendir(path); - if (dir == NULL) - return -1; + if (dir == NULL) { + rte_errno = errno; + return -rte_errno; + } } while ((dent = readdir(dir)) != NULL) { char *name = dent->d_name; @@ -195,8 +197,10 @@ mlx5_get_ifname(const struct rte_eth_dev *dev, char (*ifname)[IF_NAMESIZE]) snprintf(match, sizeof(match), "%s", name); } closedir(dir); - if (match[0] == '\0') - return -1; + if (match[0] == '\0') { + rte_errno = ENOENT; + return -rte_errno; + } strncpy(*ifname, match, sizeof(*ifname)); return 0; } @@ -212,20 +216,31 @@ mlx5_get_ifname(const struct rte_eth_dev *dev, char (*ifname)[IF_NAMESIZE]) * Interface request structure output buffer. * * @return - * 0 on success, -1 on failure and errno is set. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_ifreq(const struct rte_eth_dev *dev, int req, struct ifreq *ifr) { int sock = socket(PF_INET, SOCK_DGRAM, IPPROTO_IP); - int ret = -1; + int ret = 0; - if (sock == -1) - return ret; - if (mlx5_get_ifname(dev, &ifr->ifr_name) == 0) - ret = ioctl(sock, req, ifr); + if (sock == -1) { + rte_errno = errno; + return -rte_errno; + } + ret = mlx5_get_ifname(dev, &ifr->ifr_name); + if (ret) + goto error; + ret = ioctl(sock, req, ifr); + if (ret == -1) { + rte_errno = errno; + goto error; + } close(sock); - return ret; + return 0; +error: + close(sock); + return -rte_errno; } /** @@ -237,7 +252,7 @@ mlx5_ifreq(const struct rte_eth_dev *dev, int req, struct ifreq *ifr) * MTU value output buffer. * * @return - * 0 on success, -1 on failure and errno is set. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_get_mtu(struct rte_eth_dev *dev, uint16_t *mtu) @@ -260,7 +275,7 @@ mlx5_get_mtu(struct rte_eth_dev *dev, uint16_t *mtu) * MTU value to set. * * @return - * 0 on success, -1 on failure and errno is set. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) @@ -281,7 +296,7 @@ mlx5_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) * Bitmask for flags to modify. * * @return - * 0 on success, -1 on failure and errno is set. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_set_flags(struct rte_eth_dev *dev, unsigned int keep, unsigned int flags) @@ -303,7 +318,7 @@ mlx5_set_flags(struct rte_eth_dev *dev, unsigned int keep, unsigned int flags) * Pointer to Ethernet device structure. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_dev_configure(struct rte_eth_dev *dev) @@ -316,19 +331,22 @@ mlx5_dev_configure(struct rte_eth_dev *dev) unsigned int reta_idx_n; const uint8_t use_app_rss_key = !!dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key; + int ret = 0; if (use_app_rss_key && (dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len != rss_hash_default_key_len)) { /* MLX5 RSS only support 40bytes key. 
*/ - return EINVAL; + rte_errno = EINVAL; + return -rte_errno; } priv->rss_conf.rss_key = rte_realloc(priv->rss_conf.rss_key, rss_hash_default_key_len, 0); if (!priv->rss_conf.rss_key) { ERROR("cannot allocate RSS hash key memory (%u)", rxqs_n); - return ENOMEM; + rte_errno = ENOMEM; + return -rte_errno; } memcpy(priv->rss_conf.rss_key, use_app_rss_key ? @@ -346,7 +364,8 @@ mlx5_dev_configure(struct rte_eth_dev *dev) } if (rxqs_n > priv->ind_table_max_size) { ERROR("cannot handle this many RX queues (%u)", rxqs_n); - return EINVAL; + rte_errno = EINVAL; + return -rte_errno; } if (rxqs_n == priv->rxqs_n) return 0; @@ -359,8 +378,9 @@ mlx5_dev_configure(struct rte_eth_dev *dev) reta_idx_n = (1 << log2above((rxqs_n & (rxqs_n - 1)) ? priv->ind_table_max_size : rxqs_n)); - if (mlx5_rss_reta_index_resize(dev, reta_idx_n)) - return ENOMEM; + ret = mlx5_rss_reta_index_resize(dev, reta_idx_n); + if (ret) + return ret; /* When the number of RX queues is not a power of two, the remaining * table entries are padded with reused WQs and hashes are not spread * uniformly. */ @@ -370,7 +390,6 @@ mlx5_dev_configure(struct rte_eth_dev *dev) j = 0; } return 0; - } /** @@ -478,7 +497,7 @@ mlx5_dev_supported_ptypes_get(struct rte_eth_dev *dev) * Pointer to Ethernet device structure. * * @return - * 0 on success, -1 on error. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev) @@ -490,19 +509,22 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev) struct ifreq ifr; struct rte_eth_link dev_link; int link_speed = 0; + int ret; - if (mlx5_ifreq(dev, SIOCGIFFLAGS, &ifr)) { - WARN("ioctl(SIOCGIFFLAGS) failed: %s", strerror(errno)); - return -1; + ret = mlx5_ifreq(dev, SIOCGIFFLAGS, &ifr); + if (ret) { + WARN("ioctl(SIOCGIFFLAGS) failed: %s", strerror(rte_errno)); + return ret; } memset(&dev_link, 0, sizeof(dev_link)); dev_link.link_status = ((ifr.ifr_flags & IFF_UP) && (ifr.ifr_flags & IFF_RUNNING)); ifr.ifr_data = (void *)&edata; - if (mlx5_ifreq(dev, SIOCETHTOOL, &ifr)) { + ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); + if (ret) { WARN("ioctl(SIOCETHTOOL, ETHTOOL_GSET) failed: %s", - strerror(errno)); - return -1; + strerror(rte_errno)); + return ret; } link_speed = ethtool_cmd_speed(&edata); if (link_speed == -1) @@ -532,7 +554,8 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev) return 0; } /* Link status is still the same. */ - return -1; + rte_errno = EAGAIN; + return -rte_errno; } /** @@ -542,7 +565,7 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev) * Pointer to Ethernet device structure. * * @return - * 0 on success, -1 on error. + * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev) @@ -552,19 +575,22 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev) struct ifreq ifr; struct rte_eth_link dev_link; uint64_t sc; + int ret; - if (mlx5_ifreq(dev, SIOCGIFFLAGS, &ifr)) { - WARN("ioctl(SIOCGIFFLAGS) failed: %s", strerror(errno)); - return -1; + ret = mlx5_ifreq(dev, SIOCGIFFLAGS, &ifr); + if (ret) { + WARN("ioctl(SIOCGIFFLAGS) failed: %s", strerror(rte_errno)); + return ret; } memset(&dev_link, 0, sizeof(dev_link)); dev_link.link_status = ((ifr.ifr_flags & IFF_UP) && (ifr.ifr_flags & IFF_RUNNING)); ifr.ifr_data = (void *)&gcmd; - if (mlx5_ifreq(dev, SIOCETHTOOL, &ifr)) { + ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); + if (ret) { DEBUG("ioctl(SIOCETHTOOL, ETHTOOL_GLINKSETTINGS) failed: %s", - strerror(errno)); - return -1; + strerror(rte_errno)); + return ret; } gcmd.link_mode_masks_nwords = -gcmd.link_mode_masks_nwords; @@ -575,10 +601,11 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev) *ecmd = gcmd; ifr.ifr_data = (void *)ecmd; - if (mlx5_ifreq(dev, SIOCETHTOOL, &ifr)) { + ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); + if (ret) { DEBUG("ioctl(SIOCETHTOOL, ETHTOOL_GLINKSETTINGS) failed: %s", - strerror(errno)); - return -1; + strerror(rte_errno)); + return ret; } dev_link.link_speed = ecmd->speed; sc = ecmd->link_mode_masks[0] | @@ -628,7 +655,8 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev) return 0; } /* Link status is still the same. */ - return -1; + rte_errno = EAGAIN; + return -rte_errno; } /** @@ -641,18 +669,21 @@ static void mlx5_link_start(struct rte_eth_dev *dev) { struct priv *priv = dev->data->dev_private; - int err; + int ret; mlx5_select_tx_function(dev); mlx5_select_rx_function(dev); - err = mlx5_traffic_enable(dev); - if (err) + ret = mlx5_traffic_enable(dev); + if (ret) { ERROR("%p: error occurred while configuring control flows: %s", - (void *)dev, strerror(err)); - err = mlx5_flow_start(dev, &priv->flows); - if (err) + (void *)dev, strerror(rte_errno)); + return; + } + ret = mlx5_flow_start(dev, &priv->flows); + if (ret) { ERROR("%p: error occurred while configuring flows: %s", - (void *)dev, strerror(err)); + (void *)dev, strerror(rte_errno)); + } } /** @@ -682,7 +713,7 @@ mlx5_link_stop(struct rte_eth_dev *dev) * Link desired status. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_force_link_status_change(struct rte_eth_dev *dev, int status) @@ -696,7 +727,8 @@ mlx5_force_link_status_change(struct rte_eth_dev *dev, int status) try++; sleep(1); } - return -EAGAIN; + rte_errno = EAGAIN; + return -rte_errno; } /** @@ -708,7 +740,7 @@ mlx5_force_link_status_change(struct rte_eth_dev *dev, int status) * Wait for request completion (ignored). * * @return - * 0 on success, -1 on error. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused) @@ -725,10 +757,12 @@ mlx5_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused) ret = mlx5_link_update_unlocked_gset(dev); else ret = mlx5_link_update_unlocked_gs(dev); + if (ret) + return ret; /* If lsc interrupt is disabled, should always be ready for traffic. */ if (!dev->data->dev_conf.intr_conf.lsc) { mlx5_link_start(dev); - return ret; + return 0; } /* Re-select burst callbacks only if link status has been changed. 
*/ if (!ret && dev_link.link_status != dev->data->dev_link.link_status) { @@ -737,7 +771,7 @@ mlx5_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused) else mlx5_link_stop(dev); } - return ret; + return 0; } /** @@ -749,36 +783,32 @@ mlx5_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused) * New MTU. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) { struct priv *priv = dev->data->dev_private; - uint16_t kern_mtu; - int ret = 0; + uint16_t kern_mtu = 0; + int ret; ret = mlx5_get_mtu(dev, &kern_mtu); if (ret) - goto out; + return ret; /* Set kernel interface MTU first. */ ret = mlx5_set_mtu(dev, mtu); if (ret) - goto out; + return ret; ret = mlx5_get_mtu(dev, &kern_mtu); if (ret) - goto out; + return ret; if (kern_mtu == mtu) { priv->mtu = mtu; DEBUG("adapter port %u MTU set to %u", priv->port, mtu); + return 0; } - return 0; -out: - ret = errno; - WARN("cannot set port %u MTU to %u: %s", priv->port, mtu, - strerror(ret)); - assert(ret >= 0); - return -ret; + rte_errno = EAGAIN; + return -rte_errno; } /** @@ -790,7 +820,7 @@ mlx5_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) * Flow control output buffer. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) @@ -802,11 +832,11 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) int ret; ifr.ifr_data = (void *)ðpause; - if (mlx5_ifreq(dev, SIOCETHTOOL, &ifr)) { - ret = errno; + ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); + if (ret) { WARN("ioctl(SIOCETHTOOL, ETHTOOL_GPAUSEPARAM) failed: %s", - strerror(ret)); - goto out; + strerror(rte_errno)); + return ret; } fc_conf->autoneg = ethpause.autoneg; if (ethpause.rx_pause && ethpause.tx_pause) @@ -817,10 +847,7 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) fc_conf->mode = RTE_FC_TX_PAUSE; else fc_conf->mode = RTE_FC_NONE; - ret = 0; -out: - assert(ret >= 0); - return -ret; + return 0; } /** @@ -832,7 +859,7 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) * Flow control parameters. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) @@ -856,17 +883,14 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) ethpause.tx_pause = 1; else ethpause.tx_pause = 0; - if (mlx5_ifreq(dev, SIOCETHTOOL, &ifr)) { - ret = errno; + ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); + if (ret) { WARN("ioctl(SIOCETHTOOL, ETHTOOL_SPAUSEPARAM)" " failed: %s", - strerror(ret)); - goto out; + strerror(rte_errno)); + return ret; } - ret = 0; -out: - assert(ret >= 0); - return -ret; + return 0; } /** @@ -878,7 +902,7 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) * PCI bus address output buffer. * * @return - * 0 on success, -1 on failure and errno is set. + * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ int mlx5_ibv_device_to_pci_addr(const struct ibv_device *device, @@ -889,8 +913,10 @@ mlx5_ibv_device_to_pci_addr(const struct ibv_device *device, MKSTR(path, "%s/device/uevent", device->ibdev_path); file = fopen(path, "rb"); - if (file == NULL) - return -1; + if (file == NULL) { + rte_errno = errno; + return -rte_errno; + } while (fgets(line, sizeof(line), file) == line) { size_t len = strlen(line); int ret; @@ -926,15 +952,19 @@ mlx5_ibv_device_to_pci_addr(const struct ibv_device *device, * Pointer to Ethernet device. * * @return - * Zero if the callback process can be called immediately. + * Zero if the callback process can be called immediately, negative errno + * value otherwise and rte_errno is set. */ static int mlx5_link_status_update(struct rte_eth_dev *dev) { struct priv *priv = dev->data->dev_private; struct rte_eth_link *link = &dev->data->dev_link; + int ret; - mlx5_link_update(dev, 0); + ret = mlx5_link_update(dev, 0); + if (ret) + return ret; if (((link->link_speed == 0) && link->link_status) || ((link->link_speed != 0) && !link->link_status)) { /* @@ -1091,12 +1121,13 @@ void mlx5_dev_interrupt_handler_install(struct rte_eth_dev *dev) { struct priv *priv = dev->data->dev_private; - int rc, flags; + int ret; + int flags; assert(priv->ctx->async_fd > 0); flags = fcntl(priv->ctx->async_fd, F_GETFL); - rc = fcntl(priv->ctx->async_fd, F_SETFL, flags | O_NONBLOCK); - if (rc < 0) { + ret = fcntl(priv->ctx->async_fd, F_SETFL, flags | O_NONBLOCK); + if (ret) { INFO("failed to change file descriptor async event queue"); dev->data->dev_conf.intr_conf.lsc = 0; dev->data->dev_conf.intr_conf.rmv = 0; @@ -1108,8 +1139,10 @@ mlx5_dev_interrupt_handler_install(struct rte_eth_dev *dev) rte_intr_callback_register(&priv->intr_handle, mlx5_dev_interrupt_handler, dev); } - rc = mlx5_socket_init(dev); - if (!rc && priv->primary_socket) { + ret = mlx5_socket_init(dev); + if (ret) + ERROR("cannot initialise socket: %s", strerror(rte_errno)); + else if (priv->primary_socket) { priv->intr_handle_socket.fd = priv->primary_socket; priv->intr_handle_socket.type = RTE_INTR_HANDLE_EXT; rte_intr_callback_register(&priv->intr_handle_socket, @@ -1124,7 +1157,7 @@ mlx5_dev_interrupt_handler_install(struct rte_eth_dev *dev) * Pointer to Ethernet device structure. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_set_link_down(struct rte_eth_dev *dev) @@ -1139,7 +1172,7 @@ mlx5_set_link_down(struct rte_eth_dev *dev) * Pointer to Ethernet device structure. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_set_link_up(struct rte_eth_dev *dev) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index aaa8727ee..09a798924 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -286,7 +286,8 @@ struct mlx5_flow_items { * Internal structure to store the conversion. * * @return - * 0 on success, negative value otherwise. + * 0 on success, a negative errno value otherwise and rte_errno is + * set. */ int (*convert)(const struct rte_flow_item *item, const void *default_mask, @@ -499,45 +500,52 @@ struct ibv_spec_header { * Bit-Mask size in bytes. * * @return - * 0 on success. + * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int mlx5_flow_item_validate(const struct rte_flow_item *item, const uint8_t *mask, unsigned int size) { - int ret = 0; - - if (!item->spec && (item->mask || item->last)) - return -1; + if (!item->spec && (item->mask || item->last)) { + rte_errno = EINVAL; + return -rte_errno; + } if (item->spec && !item->mask) { unsigned int i; const uint8_t *spec = item->spec; for (i = 0; i < size; ++i) - if ((spec[i] | mask[i]) != mask[i]) - return -1; + if ((spec[i] | mask[i]) != mask[i]) { + rte_errno = EINVAL; + return -rte_errno; + } } if (item->last && !item->mask) { unsigned int i; const uint8_t *spec = item->last; for (i = 0; i < size; ++i) - if ((spec[i] | mask[i]) != mask[i]) - return -1; + if ((spec[i] | mask[i]) != mask[i]) { + rte_errno = EINVAL; + return -rte_errno; + } } if (item->mask) { unsigned int i; const uint8_t *spec = item->spec; for (i = 0; i < size; ++i) - if ((spec[i] | mask[i]) != mask[i]) - return -1; + if ((spec[i] | mask[i]) != mask[i]) { + rte_errno = EINVAL; + return -rte_errno; + } } if (item->spec && item->last) { uint8_t spec[size]; uint8_t last[size]; const uint8_t *apply = mask; unsigned int i; + int ret; if (item->mask) apply = item->mask; @@ -546,8 +554,12 @@ mlx5_flow_item_validate(const struct rte_flow_item *item, last[i] = ((const uint8_t *)item->last)[i] & apply[i]; } ret = memcmp(spec, last, size); + if (ret != 0) { + rte_errno = EINVAL; + return -rte_errno; + } } - return ret; + return 0; } /** @@ -560,7 +572,7 @@ mlx5_flow_item_validate(const struct rte_flow_item *item, * User RSS configuration to save. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_flow_convert_rss_conf(struct mlx5_flow_parse *parser, @@ -572,10 +584,14 @@ mlx5_flow_convert_rss_conf(struct mlx5_flow_parse *parser, * device default RSS configuration. */ if (rss_conf) { - if (rss_conf->rss_hf & MLX5_RSS_HF_MASK) - return EINVAL; - if (rss_conf->rss_key_len != 40) - return EINVAL; + if (rss_conf->rss_hf & MLX5_RSS_HF_MASK) { + rte_errno = EINVAL; + return -rte_errno; + } + if (rss_conf->rss_key_len != 40) { + rte_errno = EINVAL; + return -rte_errno; + } if (rss_conf->rss_key_len && rss_conf->rss_key) { parser->rss_conf.rss_key_len = rss_conf->rss_key_len; memcpy(parser->rss_key, rss_conf->rss_key, @@ -655,14 +671,17 @@ mlx5_flow_convert_actions(struct rte_eth_dev *dev, struct mlx5_flow_parse *parser) { struct priv *priv = dev->data->dev_private; + int ret; /* * Add default RSS configuration necessary for Verbs to create QP even * if no RSS is necessary. */ - mlx5_flow_convert_rss_conf(parser, - (const struct rte_eth_rss_conf *) - &priv->rss_conf); + ret = mlx5_flow_convert_rss_conf(parser, + (const struct rte_eth_rss_conf *) + &priv->rss_conf); + if (ret) + return ret; for (; actions->type != RTE_FLOW_ACTION_TYPE_END; ++actions) { if (actions->type == RTE_FLOW_ACTION_TYPE_VOID) { continue; @@ -811,6 +830,7 @@ mlx5_flow_convert_items_validate(const struct rte_flow_item items[], { const struct mlx5_flow_items *cur_item = mlx5_flow_items; unsigned int i; + int ret = 0; /* Initialise the offsets to start after verbs attribute. 
*/ for (i = 0; i != hash_rxq_init_n; ++i) @@ -818,7 +838,6 @@ mlx5_flow_convert_items_validate(const struct rte_flow_item items[], for (; items->type != RTE_FLOW_ITEM_TYPE_END; ++items) { const struct mlx5_flow_items *token = NULL; unsigned int n; - int err; if (items->type == RTE_FLOW_ITEM_TYPE_VOID) continue; @@ -834,10 +853,10 @@ mlx5_flow_convert_items_validate(const struct rte_flow_item items[], if (!token) goto exit_item_not_supported; cur_item = token; - err = mlx5_flow_item_validate(items, + ret = mlx5_flow_item_validate(items, (const uint8_t *)cur_item->mask, cur_item->mask_sz); - if (err) + if (ret) goto exit_item_not_supported; if (items->type == RTE_FLOW_ITEM_TYPE_VXLAN) { if (parser->inner) { @@ -874,9 +893,8 @@ mlx5_flow_convert_items_validate(const struct rte_flow_item items[], } return 0; exit_item_not_supported: - rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, - items, "item not supported"); - return -rte_errno; + return rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_ITEM, + items, "item not supported"); } /** @@ -890,7 +908,7 @@ mlx5_flow_convert_items_validate(const struct rte_flow_item items[], * Perform verbose error reporting if not NULL. * * @return - * A verbs flow attribute on success, NULL otherwise. + * A verbs flow attribute on success, NULL otherwise and rte_errno is set. */ static struct ibv_flow_attr * mlx5_flow_convert_allocate(unsigned int priority, @@ -1092,7 +1110,7 @@ mlx5_flow_convert(struct rte_eth_dev *dev, parser->queue[HASH_RXQ_ETH].offset, error); if (!parser->queue[HASH_RXQ_ETH].ibv_attr) - return ENOMEM; + goto exit_enomem; parser->queue[HASH_RXQ_ETH].offset = sizeof(struct ibv_flow_attr); } else { @@ -1127,7 +1145,7 @@ mlx5_flow_convert(struct rte_eth_dev *dev, cur_item->mask), parser); if (ret) { - rte_flow_error_set(error, ret, + rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_ITEM, items, "item not supported"); goto exit_free; @@ -1169,13 +1187,13 @@ mlx5_flow_convert(struct rte_eth_dev *dev, parser->queue[i].ibv_attr = NULL; } } - rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "cannot allocate verbs spec attributes."); - return ret; + return -rte_errno; exit_count_error: rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "cannot create counter."); - return rte_errno; + return -rte_errno; } /** @@ -1221,6 +1239,9 @@ mlx5_flow_create_copy(struct mlx5_flow_parse *parser, void *src, * Default bit-masks to use when item->mask is not provided. * @param data[in, out] * User structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_flow_create_eth(const struct rte_flow_item *item, @@ -1270,6 +1291,9 @@ mlx5_flow_create_eth(const struct rte_flow_item *item, * Default bit-masks to use when item->mask is not provided. * @param data[in, out] * User structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_flow_create_vlan(const struct rte_flow_item *item, @@ -1310,6 +1334,9 @@ mlx5_flow_create_vlan(const struct rte_flow_item *item, * Default bit-masks to use when item->mask is not provided. * @param data[in, out] * User structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int mlx5_flow_create_ipv4(const struct rte_flow_item *item, @@ -1362,6 +1389,9 @@ mlx5_flow_create_ipv4(const struct rte_flow_item *item, * Default bit-masks to use when item->mask is not provided. * @param data[in, out] * User structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_flow_create_ipv6(const struct rte_flow_item *item, @@ -1418,6 +1448,9 @@ mlx5_flow_create_ipv6(const struct rte_flow_item *item, * Default bit-masks to use when item->mask is not provided. * @param data[in, out] * User structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_flow_create_udp(const struct rte_flow_item *item, @@ -1464,6 +1497,9 @@ mlx5_flow_create_udp(const struct rte_flow_item *item, * Default bit-masks to use when item->mask is not provided. * @param data[in, out] * User structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_flow_create_tcp(const struct rte_flow_item *item, @@ -1510,6 +1546,9 @@ mlx5_flow_create_tcp(const struct rte_flow_item *item, * Default bit-masks to use when item->mask is not provided. * @param data[in, out] * User structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_flow_create_vxlan(const struct rte_flow_item *item, @@ -1549,8 +1588,10 @@ mlx5_flow_create_vxlan(const struct rte_flow_item *item, * before will also match this rule. * To avoid such situation, VNI 0 is currently refused. */ - if (!vxlan.val.tunnel_id) - return EINVAL; + if (!vxlan.val.tunnel_id) { + rte_errno = EINVAL; + return -rte_errno; + } mlx5_flow_create_copy(parser, &vxlan, size); return 0; } @@ -1562,6 +1603,9 @@ mlx5_flow_create_vxlan(const struct rte_flow_item *item, * Internal parser structure. * @param mark_id * Mark identifier. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_flow_create_flag_mark(struct mlx5_flow_parse *parser, uint32_t mark_id) @@ -1587,7 +1631,7 @@ mlx5_flow_create_flag_mark(struct mlx5_flow_parse *parser, uint32_t mark_id) * Pointer to MLX5 flow parser structure. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_flow_create_count(struct rte_eth_dev *dev __rte_unused, @@ -1605,8 +1649,10 @@ mlx5_flow_create_count(struct rte_eth_dev *dev __rte_unused, init_attr.counter_set_id = 0; parser->cs = ibv_create_counter_set(priv->ctx, &init_attr); - if (!parser->cs) - return EINVAL; + if (!parser->cs) { + rte_errno = EINVAL; + return -rte_errno; + } counter.counter_set_handle = parser->cs->handle; mlx5_flow_create_copy(parser, &counter, size); #endif @@ -1626,7 +1672,7 @@ mlx5_flow_create_count(struct rte_eth_dev *dev __rte_unused, * Perform verbose error reporting if not NULL. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int mlx5_flow_create_action_queue_drop(struct rte_eth_dev *dev, @@ -1637,7 +1683,6 @@ mlx5_flow_create_action_queue_drop(struct rte_eth_dev *dev, struct priv *priv = dev->data->dev_private; struct ibv_flow_spec_action_drop *drop; unsigned int size = sizeof(struct ibv_flow_spec_action_drop); - int err = 0; assert(priv->pd); assert(priv->ctx); @@ -1663,7 +1708,6 @@ mlx5_flow_create_action_queue_drop(struct rte_eth_dev *dev, if (!flow->frxq[HASH_RXQ_ETH].ibv_flow) { rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE, NULL, "flow rule creation failure"); - err = ENOMEM; goto error; } return 0; @@ -1682,7 +1726,7 @@ mlx5_flow_create_action_queue_drop(struct rte_eth_dev *dev, flow->cs = NULL; parser->cs = NULL; } - return err; + return -rte_errno; } /** @@ -1698,7 +1742,7 @@ mlx5_flow_create_action_queue_drop(struct rte_eth_dev *dev, * Perform verbose error reporting if not NULL. * * @return - * 0 on success, a errno value otherwise and rte_errno is set. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_flow_create_action_queue_rss(struct rte_eth_dev *dev, @@ -1736,10 +1780,10 @@ mlx5_flow_create_action_queue_rss(struct rte_eth_dev *dev, parser->queues, parser->queues_n); if (!flow->frxq[i].hrxq) { - rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_HANDLE, - NULL, "cannot create hash rxq"); - return ENOMEM; + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, + "cannot create hash rxq"); } } return 0; @@ -1758,7 +1802,7 @@ mlx5_flow_create_action_queue_rss(struct rte_eth_dev *dev, * Perform verbose error reporting if not NULL. * * @return - * 0 on success, a errno value otherwise and rte_errno is set. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_flow_create_action_queue(struct rte_eth_dev *dev, @@ -1767,15 +1811,15 @@ mlx5_flow_create_action_queue(struct rte_eth_dev *dev, struct rte_flow_error *error) { struct priv *priv = dev->data->dev_private; - int err = 0; + int ret; unsigned int i; unsigned int flows_n = 0; assert(priv->pd); assert(priv->ctx); assert(!parser->drop); - err = mlx5_flow_create_action_queue_rss(dev, parser, flow, error); - if (err) + ret = mlx5_flow_create_action_queue_rss(dev, parser, flow, error); + if (ret) goto error; if (parser->count) flow->cs = parser->cs; @@ -1791,7 +1835,6 @@ mlx5_flow_create_action_queue(struct rte_eth_dev *dev, rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE, NULL, "flow rule creation failure"); - err = ENOMEM; goto error; } ++flows_n; @@ -1813,6 +1856,7 @@ mlx5_flow_create_action_queue(struct rte_eth_dev *dev, } return 0; error: + ret = rte_errno; /* Save rte_errno before cleanup. */ assert(flow); for (i = 0; i != hash_rxq_init_n; ++i) { if (flow->frxq[i].ibv_flow) { @@ -1830,7 +1874,8 @@ mlx5_flow_create_action_queue(struct rte_eth_dev *dev, flow->cs = NULL; parser->cs = NULL; } - return err; + rte_errno = ret; /* Restore rte_errno. */ + return -rte_errno; } /** @@ -1850,7 +1895,7 @@ mlx5_flow_create_action_queue(struct rte_eth_dev *dev, * Perform verbose error reporting if not NULL. * * @return - * A flow on success, NULL otherwise. + * A flow on success, NULL otherwise and rte_errno is set. 
*/ static struct rte_flow * mlx5_flow_list_create(struct rte_eth_dev *dev, @@ -1863,10 +1908,10 @@ mlx5_flow_list_create(struct rte_eth_dev *dev, struct mlx5_flow_parse parser = { .create = 1, }; struct rte_flow *flow = NULL; unsigned int i; - int err; + int ret; - err = mlx5_flow_convert(dev, attr, items, actions, error, &parser); - if (err) + ret = mlx5_flow_convert(dev, attr, items, actions, error, &parser); + if (ret) goto exit; flow = rte_calloc(__func__, 1, sizeof(*flow) + parser.queues_n * sizeof(uint16_t), @@ -1889,11 +1934,11 @@ mlx5_flow_list_create(struct rte_eth_dev *dev, memcpy(flow->rss_key, parser.rss_key, parser.rss_conf.rss_key_len); /* finalise the flow. */ if (parser.drop) - err = mlx5_flow_create_action_queue_drop(dev, &parser, flow, + ret = mlx5_flow_create_action_queue_drop(dev, &parser, flow, error); else - err = mlx5_flow_create_action_queue(dev, &parser, flow, error); - if (err) + ret = mlx5_flow_create_action_queue(dev, &parser, flow, error); + if (ret) goto exit; TAILQ_INSERT_TAIL(list, flow, next); DEBUG("Flow created %p", (void *)flow); @@ -1920,11 +1965,9 @@ mlx5_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_action actions[], struct rte_flow_error *error) { - int ret; struct mlx5_flow_parse parser = { .create = 0, }; - ret = mlx5_flow_convert(dev, attr, items, actions, error, &parser); - return ret; + return mlx5_flow_convert(dev, attr, items, actions, error, &parser); } /** @@ -2047,7 +2090,7 @@ mlx5_flow_list_flush(struct rte_eth_dev *dev, struct mlx5_flows *list) * Pointer to Ethernet device. * * @return - * 0 on success. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_flow_create_drop_queue(struct rte_eth_dev *dev) @@ -2060,11 +2103,13 @@ mlx5_flow_create_drop_queue(struct rte_eth_dev *dev) fdq = rte_calloc(__func__, 1, sizeof(*fdq), 0); if (!fdq) { WARN("cannot allocate memory for drop queue"); - goto error; + rte_errno = ENOMEM; + return -rte_errno; } fdq->cq = ibv_create_cq(priv->ctx, 1, NULL, NULL, 0); if (!fdq->cq) { WARN("cannot allocate CQ for drop queue"); + rte_errno = errno; goto error; } fdq->wq = ibv_create_wq(priv->ctx, @@ -2077,6 +2122,7 @@ mlx5_flow_create_drop_queue(struct rte_eth_dev *dev) }); if (!fdq->wq) { WARN("cannot allocate WQ for drop queue"); + rte_errno = errno; goto error; } fdq->ind_table = ibv_create_rwq_ind_table(priv->ctx, @@ -2087,6 +2133,7 @@ mlx5_flow_create_drop_queue(struct rte_eth_dev *dev) }); if (!fdq->ind_table) { WARN("cannot allocate indirection table for drop queue"); + rte_errno = errno; goto error; } fdq->qp = ibv_create_qp_ex(priv->ctx, @@ -2108,6 +2155,7 @@ mlx5_flow_create_drop_queue(struct rte_eth_dev *dev) }); if (!fdq->qp) { WARN("cannot allocate QP for drop queue"); + rte_errno = errno; goto error; } priv->flow_drop_queue = fdq; @@ -2124,7 +2172,7 @@ mlx5_flow_create_drop_queue(struct rte_eth_dev *dev) if (fdq) rte_free(fdq); priv->flow_drop_queue = NULL; - return -1; + return -rte_errno; } /** @@ -2222,7 +2270,7 @@ mlx5_flow_stop(struct rte_eth_dev *dev, struct mlx5_flows *list) * Pointer to a TAILQ flow list. * * @return - * 0 on success, a errno value otherwise and rte_errno is set. + * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ int mlx5_flow_start(struct rte_eth_dev *dev, struct mlx5_flows *list) @@ -2242,7 +2290,7 @@ mlx5_flow_start(struct rte_eth_dev *dev, struct mlx5_flows *list) DEBUG("Flow %p cannot be applied", (void *)flow); rte_errno = EINVAL; - return rte_errno; + return -rte_errno; } DEBUG("Flow %p applied", (void *)flow); /* Next flow. */ @@ -2269,7 +2317,7 @@ mlx5_flow_start(struct rte_eth_dev *dev, struct mlx5_flows *list) DEBUG("Flow %p cannot be applied", (void *)flow); rte_errno = EINVAL; - return rte_errno; + return -rte_errno; } flow_create: flow->frxq[i].ibv_flow = @@ -2279,7 +2327,7 @@ mlx5_flow_start(struct rte_eth_dev *dev, struct mlx5_flows *list) DEBUG("Flow %p cannot be applied", (void *)flow); rte_errno = EINVAL; - return rte_errno; + return -rte_errno; } DEBUG("Flow %p applied", (void *)flow); } @@ -2329,7 +2377,7 @@ mlx5_flow_verify(struct rte_eth_dev *dev) * A VLAN flow mask to apply. * * @return - * 0 on success. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev, @@ -2381,8 +2429,10 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev, } local; } action_rss; - if (!priv->reta_idx_n) - return EINVAL; + if (!priv->reta_idx_n) { + rte_errno = EINVAL; + return -rte_errno; + } for (i = 0; i != priv->reta_idx_n; ++i) action_rss.local.queue[i] = (*priv->reta_idx)[i]; action_rss.local.rss_conf = &priv->rss_conf; @@ -2391,7 +2441,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev, flow = mlx5_flow_list_create(dev, &priv->ctrl_flows, &attr, items, actions, &error); if (!flow) - return rte_errno; + return -rte_errno; return 0; } @@ -2406,7 +2456,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev, * An Ethernet flow mask to apply. * * @return - * 0 on success. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_ctrl_flow(struct rte_eth_dev *dev, @@ -2459,7 +2509,7 @@ mlx5_flow_flush(struct rte_eth_dev *dev, * returned data from the counter. * * @return - * 0 on success, a errno value otherwise and rte_errno is set. + * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int mlx5_flow_query_count(struct ibv_counter_set *cs, @@ -2476,15 +2526,13 @@ mlx5_flow_query_count(struct ibv_counter_set *cs, .out = counters, .outlen = 2 * sizeof(uint64_t), }; - int res = ibv_query_counter_set(&query_cs_attr, &query_out); + int err = ibv_query_counter_set(&query_cs_attr, &query_out); - if (res) { - rte_flow_error_set(error, -res, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "cannot read counter"); - return -res; - } + if (err) + return rte_flow_error_set(error, err, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "cannot read counter"); query_count->hits_set = 1; query_count->bytes_set = 1; query_count->hits = counters[0] - counter_stats->hits; @@ -2509,20 +2557,22 @@ mlx5_flow_query(struct rte_eth_dev *dev __rte_unused, void *data, struct rte_flow_error *error) { - int res = EINVAL; - if (flow->cs) { - res = mlx5_flow_query_count(flow->cs, - &flow->counter_stats, - (struct rte_flow_query_count *)data, - error); + int ret; + + ret = mlx5_flow_query_count(flow->cs, + &flow->counter_stats, + (struct rte_flow_query_count *)data, + error); + if (ret) + return ret; } else { - rte_flow_error_set(error, res, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "no counter found for flow"); + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "no counter found for flow"); } - return -res; + return 0; } #endif @@ -2565,7 +2615,7 @@ mlx5_flow_isolate(struct rte_eth_dev *dev, * Generic flow parameters structure. * * @return - * 0 on success, errno value on error. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_fdir_filter_convert(struct rte_eth_dev *dev, @@ -2578,7 +2628,8 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev, /* Validate queue number. */ if (fdir_filter->action.rx_queue >= priv->rxqs_n) { ERROR("invalid queue number %d", fdir_filter->action.rx_queue); - return EINVAL; + rte_errno = EINVAL; + return -rte_errno; } attributes->attr.ingress = 1; attributes->items[0] = (struct rte_flow_item) { @@ -2600,7 +2651,8 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev, break; default: ERROR("invalid behavior %d", fdir_filter->action.behavior); - return ENOTSUP; + rte_errno = ENOTSUP; + return -rte_errno; } attributes->queue.index = fdir_filter->action.rx_queue; switch (fdir_filter->input.flow_type) { @@ -2734,9 +2786,9 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev, }; break; default: - ERROR("invalid flow type%d", - fdir_filter->input.flow_type); - return ENOTSUP; + ERROR("invalid flow type%d", fdir_filter->input.flow_type); + rte_errno = ENOTSUP; + return -rte_errno; } return 0; } @@ -2750,7 +2802,7 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev, * Flow director filter to add. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int mlx5_fdir_filter_add(struct rte_eth_dev *dev, @@ -2774,11 +2826,11 @@ mlx5_fdir_filter_add(struct rte_eth_dev *dev, ret = mlx5_fdir_filter_convert(dev, fdir_filter, &attributes); if (ret) - return -ret; + return ret; ret = mlx5_flow_convert(dev, &attributes.attr, attributes.items, attributes.actions, &error, &parser); if (ret) - return -ret; + return ret; flow = mlx5_flow_list_create(dev, &priv->flows, &attributes.attr, attributes.items, attributes.actions, &error); @@ -2786,7 +2838,7 @@ mlx5_fdir_filter_add(struct rte_eth_dev *dev, DEBUG("FDIR created %p", (void *)flow); return 0; } - return ENOTSUP; + return -rte_errno; } /** @@ -2798,7 +2850,7 @@ mlx5_fdir_filter_add(struct rte_eth_dev *dev, * Filter to be deleted. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_fdir_filter_delete(struct rte_eth_dev *dev, @@ -2819,7 +2871,7 @@ mlx5_fdir_filter_delete(struct rte_eth_dev *dev, ret = mlx5_fdir_filter_convert(dev, fdir_filter, &attributes); if (ret) - return -ret; + return ret; ret = mlx5_flow_convert(dev, &attributes.attr, attributes.items, attributes.actions, &error, &parser); if (ret) @@ -2877,6 +2929,7 @@ mlx5_fdir_filter_delete(struct rte_eth_dev *dev, /* The flow does not match. */ continue; } + ret = rte_errno; /* Save rte_errno before cleanup. */ if (flow) mlx5_flow_list_destroy(dev, &priv->flows, flow); exit: @@ -2884,7 +2937,8 @@ mlx5_fdir_filter_delete(struct rte_eth_dev *dev, if (parser.queue[i].ibv_attr) rte_free(parser.queue[i].ibv_attr); } - return -ret; + rte_errno = ret; /* Restore rte_errno. */ + return -rte_errno; } /** @@ -2896,7 +2950,7 @@ mlx5_fdir_filter_delete(struct rte_eth_dev *dev, * Filter to be updated. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_fdir_filter_update(struct rte_eth_dev *dev, @@ -2907,8 +2961,7 @@ mlx5_fdir_filter_update(struct rte_eth_dev *dev, ret = mlx5_fdir_filter_delete(dev, fdir_filter); if (ret) return ret; - ret = mlx5_fdir_filter_add(dev, fdir_filter); - return ret; + return mlx5_fdir_filter_add(dev, fdir_filter); } /** @@ -2962,7 +3015,7 @@ mlx5_fdir_info_get(struct rte_eth_dev *dev, struct rte_eth_fdir_info *fdir_info) * Pointer to operation-specific structure. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int mlx5_fdir_ctrl_func(struct rte_eth_dev *dev, enum rte_filter_op filter_op, @@ -2971,7 +3024,6 @@ mlx5_fdir_ctrl_func(struct rte_eth_dev *dev, enum rte_filter_op filter_op, struct priv *priv = dev->data->dev_private; enum rte_fdir_mode fdir_mode = priv->dev->data->dev_conf.fdir_conf.mode; - int ret = 0; if (filter_op == RTE_ETH_FILTER_NOP) return 0; @@ -2979,18 +3031,16 @@ mlx5_fdir_ctrl_func(struct rte_eth_dev *dev, enum rte_filter_op filter_op, fdir_mode != RTE_FDIR_MODE_PERFECT_MAC_VLAN) { ERROR("%p: flow director mode %d not supported", (void *)dev, fdir_mode); - return EINVAL; + rte_errno = EINVAL; + return -rte_errno; } switch (filter_op) { case RTE_ETH_FILTER_ADD: - ret = mlx5_fdir_filter_add(dev, arg); - break; + return mlx5_fdir_filter_add(dev, arg); case RTE_ETH_FILTER_UPDATE: - ret = mlx5_fdir_filter_update(dev, arg); - break; + return mlx5_fdir_filter_update(dev, arg); case RTE_ETH_FILTER_DELETE: - ret = mlx5_fdir_filter_delete(dev, arg); - break; + return mlx5_fdir_filter_delete(dev, arg); case RTE_ETH_FILTER_FLUSH: mlx5_fdir_filter_flush(dev); break; @@ -2998,12 +3048,11 @@ mlx5_fdir_ctrl_func(struct rte_eth_dev *dev, enum rte_filter_op filter_op, mlx5_fdir_info_get(dev, arg); break; default: - DEBUG("%p: unknown operation %u", (void *)dev, - filter_op); - ret = EINVAL; - break; + DEBUG("%p: unknown operation %u", (void *)dev, filter_op); + rte_errno = EINVAL; + return -rte_errno; } - return ret; + return 0; } /** @@ -3019,7 +3068,7 @@ mlx5_fdir_ctrl_func(struct rte_eth_dev *dev, enum rte_filter_op filter_op, * Pointer to operation-specific structure. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_dev_filter_ctrl(struct rte_eth_dev *dev, @@ -3027,21 +3076,21 @@ mlx5_dev_filter_ctrl(struct rte_eth_dev *dev, enum rte_filter_op filter_op, void *arg) { - int ret = EINVAL; - switch (filter_type) { case RTE_ETH_FILTER_GENERIC: - if (filter_op != RTE_ETH_FILTER_GET) - return -EINVAL; + if (filter_op != RTE_ETH_FILTER_GET) { + rte_errno = EINVAL; + return -rte_errno; + } *(const void **)arg = &mlx5_flow_ops; return 0; case RTE_ETH_FILTER_FDIR: - ret = mlx5_fdir_ctrl_func(dev, filter_op, arg); - break; + return mlx5_fdir_ctrl_func(dev, filter_op, arg); default: ERROR("%p: filter type (%d) not supported", (void *)dev, filter_type); - break; + rte_errno = ENOTSUP; + return -rte_errno; } - return -ret; + return 0; } diff --git a/drivers/net/mlx5/mlx5_mac.c b/drivers/net/mlx5/mlx5_mac.c index 20fed527b..e9d9c67e9 100644 --- a/drivers/net/mlx5/mlx5_mac.c +++ b/drivers/net/mlx5/mlx5_mac.c @@ -69,15 +69,17 @@ * MAC address output buffer. * * @return - * 0 on success, -1 on failure and errno is set. + * 0 on success, a negative errno value otherwise and rte_errno is set. 
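With every filter operation now returning a negative errno itself, the dispatcher can return from each case directly instead of funnelling everything through a single ret variable; only operations that cannot fail fall through to the final return 0. The control flow in miniature (names are illustrative):

#include <errno.h>
#include <rte_errno.h>

enum example_op { EXAMPLE_ADD, EXAMPLE_FLUSH };

static int example_add(void) { return 0; }	/* 0 or -rte_errno style */
static void example_flush(void) { }		/* cannot fail */

static int
example_dispatch(enum example_op op)
{
	switch (op) {
	case EXAMPLE_ADD:
		return example_add();	/* propagate unchanged */
	case EXAMPLE_FLUSH:
		example_flush();
		break;
	default:
		rte_errno = EINVAL;
		return -rte_errno;
	}
	return 0;
}
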
*/ int mlx5_get_mac(struct rte_eth_dev *dev, uint8_t (*mac)[ETHER_ADDR_LEN]) { struct ifreq request; + int ret; - if (mlx5_ifreq(dev, SIOCGIFHWADDR, &request)) - return -1; + ret = mlx5_ifreq(dev, SIOCGIFHWADDR, &request); + if (ret) + return ret; memcpy(mac, request.ifr_hwaddr.sa_data, ETHER_ADDR_LEN); return 0; } @@ -95,8 +97,13 @@ mlx5_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index) { assert(index < MLX5_MAX_MAC_ADDRESSES); memset(&dev->data->mac_addrs[index], 0, sizeof(struct ether_addr)); - if (!dev->data->promiscuous) - mlx5_traffic_restart(dev); + if (!dev->data->promiscuous) { + int ret = mlx5_traffic_restart(dev); + + if (ret) + ERROR("%p cannot remove mac address: %s", (void *)dev, + strerror(rte_errno)); + } } /** @@ -112,14 +119,13 @@ mlx5_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index) * VMDq pool index to associate address with (ignored). * * @return - * 0 on success. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_mac_addr_add(struct rte_eth_dev *dev, struct ether_addr *mac, uint32_t index, uint32_t vmdq __rte_unused) { unsigned int i; - int ret = 0; assert(index < MLX5_MAX_MAC_ADDRESSES); /* First, make sure this address isn't already configured. */ @@ -130,12 +136,13 @@ mlx5_mac_addr_add(struct rte_eth_dev *dev, struct ether_addr *mac, if (memcmp(&dev->data->mac_addrs[i], mac, sizeof(*mac))) continue; /* Address already configured elsewhere, return with error. */ - return EADDRINUSE; + rte_errno = EADDRINUSE; + return -rte_errno; } dev->data->mac_addrs[index] = *mac; if (!dev->data->promiscuous) - mlx5_traffic_restart(dev); - return ret; + return mlx5_traffic_restart(dev); + return 0; } /** @@ -149,6 +156,10 @@ mlx5_mac_addr_add(struct rte_eth_dev *dev, struct ether_addr *mac, void mlx5_mac_addr_set(struct rte_eth_dev *dev, struct ether_addr *mac_addr) { + int ret; + DEBUG("%p: setting primary MAC address", (void *)dev); - mlx5_mac_addr_add(dev, mac_addr, 0, 0); + ret = mlx5_mac_addr_add(dev, mac_addr, 0, 0); + if (ret) + ERROR("cannot set mac address: %s", strerror(rte_errno)); } diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c index cfad4798b..3a4e46f9a 100644 --- a/drivers/net/mlx5/mlx5_mr.c +++ b/drivers/net/mlx5/mlx5_mr.c @@ -120,7 +120,7 @@ mlx5_check_mempool(struct rte_mempool *mp, uintptr_t *start, * Index of the next available entry. * * @return - * mr on success, NULL on failure. + * mr on success, NULL on failure and rte_errno is set. */ struct mlx5_mr * mlx5_txq_mp2mr_reg(struct mlx5_txq_data *txq, struct rte_mempool *mp, @@ -144,6 +144,7 @@ mlx5_txq_mp2mr_reg(struct mlx5_txq_data *txq, struct rte_mempool *mp, " rte_eth_dev_start()", (void *)mp, mp->name); rte_spinlock_unlock(&txq_ctrl->priv->mr_lock); + rte_errno = ENOTSUP; return NULL; } mr = mlx5_mr_new(dev, mp); @@ -232,7 +233,9 @@ mlx5_mp2mr_iter(struct rte_mempool *mp, void *arg) mlx5_mr_release(mr); return; } - mlx5_mr_new(priv->dev, mp); + mr = mlx5_mr_new(priv->dev, mp); + if (!mr) + ERROR("cannot create memory region: %s", strerror(rte_errno)); } /** @@ -245,7 +248,7 @@ mlx5_mp2mr_iter(struct rte_mempool *mp, void *arg) * Pointer to the memory pool to register. * * @return - * The memory region on success. + * The memory region on success, NULL on failure and rte_errno is set. 
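mlx5_mac_addr_remove() and mlx5_mac_addr_set() are void in this ethdev ABI, so failures can only be logged; strerror(rte_errno) is reliable here because the failing helper is guaranteed to have set rte_errno just before returning. The generic shape (ERROR() and the helper are stand-ins for the driver's own):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <rte_errno.h>

#define ERROR(fmt, ...) fprintf(stderr, fmt "\n", __VA_ARGS__)

/* Hypothetical helper following the -rte_errno convention. */
static int example_helper(void) { rte_errno = ENOTSUP; return -rte_errno; }

static void
example_void_callback(void)
{
	if (example_helper())
		ERROR("operation failed: %s", strerror(rte_errno));
}
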
*/ struct mlx5_mr * mlx5_mr_new(struct rte_eth_dev *dev, struct rte_mempool *mp) @@ -260,11 +263,13 @@ mlx5_mr_new(struct rte_eth_dev *dev, struct rte_mempool *mp) mr = rte_zmalloc_socket(__func__, sizeof(*mr), 0, mp->socket_id); if (!mr) { DEBUG("unable to configure MR, ibv_reg_mr() failed."); + rte_errno = ENOMEM; return NULL; } if (mlx5_check_mempool(mp, &start, &end) != 0) { ERROR("mempool %p: not virtually contiguous", (void *)mp); + rte_errno = ENOMEM; return NULL; } DEBUG("mempool %p area start=%p end=%p size=%zu", @@ -289,6 +294,10 @@ mlx5_mr_new(struct rte_eth_dev *dev, struct rte_mempool *mp) (size_t)(end - start)); mr->mr = ibv_reg_mr(priv->pd, (void *)start, end - start, IBV_ACCESS_LOCAL_WRITE); + if (!mr->mr) { + rte_errno = ENOMEM; + return NULL; + } mr->mp = mp; mr->lkey = rte_cpu_to_be_32(mr->mr->lkey); rte_atomic32_inc(&mr->refcnt); diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c index badf0c0f9..90682a308 100644 --- a/drivers/net/mlx5/mlx5_rss.c +++ b/drivers/net/mlx5/mlx5_rss.c @@ -63,33 +63,31 @@ * RSS configuration data. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { struct priv *priv = dev->data->dev_private; - int ret = 0; if (rss_conf->rss_hf & MLX5_RSS_HF_MASK) { - ret = -EINVAL; - goto out; + rte_errno = EINVAL; + return -rte_errno; } if (rss_conf->rss_key && rss_conf->rss_key_len) { priv->rss_conf.rss_key = rte_realloc(priv->rss_conf.rss_key, rss_conf->rss_key_len, 0); if (!priv->rss_conf.rss_key) { - ret = -ENOMEM; - goto out; + rte_errno = ENOMEM; + return -rte_errno; } memcpy(priv->rss_conf.rss_key, rss_conf->rss_key, rss_conf->rss_key_len); priv->rss_conf.rss_key_len = rss_conf->rss_key_len; } priv->rss_conf.rss_hf = rss_conf->rss_hf; -out: - return ret; + return 0; } /** @@ -101,7 +99,7 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev, * RSS configuration data. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_rss_hash_conf_get(struct rte_eth_dev *dev, @@ -109,8 +107,10 @@ mlx5_rss_hash_conf_get(struct rte_eth_dev *dev, { struct priv *priv = dev->data->dev_private; - if (!rss_conf) - return -EINVAL; + if (!rss_conf) { + rte_errno = EINVAL; + return -rte_errno; + } if (rss_conf->rss_key && (rss_conf->rss_key_len >= priv->rss_conf.rss_key_len)) { memcpy(rss_conf->rss_key, priv->rss_conf.rss_key, @@ -130,7 +130,7 @@ mlx5_rss_hash_conf_get(struct rte_eth_dev *dev, * The size of the array to allocate. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_rss_reta_index_resize(struct rte_eth_dev *dev, unsigned int reta_size) @@ -144,8 +144,10 @@ mlx5_rss_reta_index_resize(struct rte_eth_dev *dev, unsigned int reta_size) mem = rte_realloc(priv->reta_idx, reta_size * sizeof((*priv->reta_idx)[0]), 0); - if (!mem) - return ENOMEM; + if (!mem) { + rte_errno = ENOMEM; + return -rte_errno; + } priv->reta_idx = mem; priv->reta_idx_n = reta_size; if (old_size < reta_size) @@ -166,7 +168,7 @@ mlx5_rss_reta_index_resize(struct rte_eth_dev *dev, unsigned int reta_size) * Size of the RETA table. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. 
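Functions returning a pointer cannot carry a negative code, so their contract becomes "NULL on failure and rte_errno is set", exactly as the updated mlx5_mr_new() documentation reads. Both sides of that contract in miniature (types and names are illustrative):

#include <errno.h>
#include <stdlib.h>
#include <rte_errno.h>

struct example_obj { int x; };

static struct example_obj *
example_new(void)
{
	struct example_obj *obj = calloc(1, sizeof(*obj));

	if (!obj) {
		rte_errno = ENOMEM;	/* pointer ctors report via rte_errno */
		return NULL;
	}
	return obj;
}

static int
example_caller(void)
{
	struct example_obj *obj = example_new();

	if (!obj)
		return -rte_errno;	/* no translation needed */
	free(obj);
	return 0;
}
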
*/ int mlx5_dev_rss_reta_query(struct rte_eth_dev *dev, @@ -177,8 +179,10 @@ mlx5_dev_rss_reta_query(struct rte_eth_dev *dev, unsigned int idx; unsigned int i; - if (!reta_size || reta_size > priv->reta_idx_n) - return -EINVAL; + if (!reta_size || reta_size > priv->reta_idx_n) { + rte_errno = EINVAL; + return -rte_errno; + } /* Fill each entry of the table even if its bit is not set. */ for (idx = 0, i = 0; (i != reta_size); ++i) { idx = i / RTE_RETA_GROUP_SIZE; @@ -199,7 +203,7 @@ mlx5_dev_rss_reta_query(struct rte_eth_dev *dev, * Size of the RETA table. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_dev_rss_reta_update(struct rte_eth_dev *dev, @@ -212,8 +216,10 @@ mlx5_dev_rss_reta_update(struct rte_eth_dev *dev, unsigned int i; unsigned int pos; - if (!reta_size) - return -EINVAL; + if (!reta_size) { + rte_errno = EINVAL; + return -rte_errno; + } ret = mlx5_rss_reta_index_resize(dev, reta_size); if (ret) return ret; @@ -227,7 +233,7 @@ mlx5_dev_rss_reta_update(struct rte_eth_dev *dev, } if (dev->data->dev_started) { mlx5_dev_stop(dev); - mlx5_dev_start(dev); + return mlx5_dev_start(dev); } - return -ret; + return 0; } diff --git a/drivers/net/mlx5/mlx5_rxmode.c b/drivers/net/mlx5/mlx5_rxmode.c index 6fb245ba1..09808a39a 100644 --- a/drivers/net/mlx5/mlx5_rxmode.c +++ b/drivers/net/mlx5/mlx5_rxmode.c @@ -60,8 +60,13 @@ void mlx5_promiscuous_enable(struct rte_eth_dev *dev) { + int ret; + dev->data->promiscuous = 1; - mlx5_traffic_restart(dev); + ret = mlx5_traffic_restart(dev); + if (ret) + ERROR("%p cannot enable promiscuous mode: %s", (void *)dev, + strerror(rte_errno)); } /** @@ -73,8 +78,13 @@ mlx5_promiscuous_enable(struct rte_eth_dev *dev) void mlx5_promiscuous_disable(struct rte_eth_dev *dev) { + int ret; + dev->data->promiscuous = 0; - mlx5_traffic_restart(dev); + ret = mlx5_traffic_restart(dev); + if (ret) + ERROR("%p cannot disable promiscuous mode: %s", (void *)dev, + strerror(rte_errno)); } /** @@ -86,8 +96,13 @@ mlx5_promiscuous_disable(struct rte_eth_dev *dev) void mlx5_allmulticast_enable(struct rte_eth_dev *dev) { + int ret; + dev->data->all_multicast = 1; - mlx5_traffic_restart(dev); + ret = mlx5_traffic_restart(dev); + if (ret) + ERROR("%p cannot enable allmulticast mode: %s", (void *)dev, + strerror(rte_errno)); } /** @@ -99,6 +114,11 @@ mlx5_allmulticast_enable(struct rte_eth_dev *dev) void mlx5_allmulticast_disable(struct rte_eth_dev *dev) { + int ret; + dev->data->all_multicast = 0; - mlx5_traffic_restart(dev); + ret = mlx5_traffic_restart(dev); + if (ret) + ERROR("%p cannot disable allmulticast mode: %s", (void *)dev, + strerror(rte_errno)); } diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index cc1d7ba5d..40a8b72e7 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -88,7 +88,7 @@ const size_t rss_hash_default_key_len = sizeof(rss_hash_default_key); * Pointer to RX queue structure. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl) @@ -96,7 +96,7 @@ rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl) const unsigned int sges_n = 1 << rxq_ctrl->rxq.sges_n; unsigned int elts_n = 1 << rxq_ctrl->rxq.elts_n; unsigned int i; - int ret = 0; + int err; /* Iterate on segments.
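A recurring detail in these hunks is the switch from "return -ret;" to plain "return ret;" (or "return 0;"): once a callee already returns negative errno values, negating again would flip errors back to positive and break callers. Composition is meant to be transparent, as in this illustrative pair:

#include <rte_errno.h>

static int example_step(void) { return 0; }	/* 0 or -rte_errno style */

static int
example_propagate(void)
{
	int ret = example_step();

	if (ret)
		return ret;	/* already negative: pass through unchanged */
	return 0;
}
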
*/ for (i = 0; (i != elts_n); ++i) { @@ -105,7 +105,7 @@ rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl) buf = rte_pktmbuf_alloc(rxq_ctrl->rxq.mp); if (buf == NULL) { ERROR("%p: empty mbuf pool", (void *)rxq_ctrl); - ret = ENOMEM; + rte_errno = ENOMEM; goto error; } /* Headroom is reserved by rte_pktmbuf_alloc(). */ @@ -147,9 +147,9 @@ rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl) } DEBUG("%p: allocated and configured %u segments (max %u packets)", (void *)rxq_ctrl, elts_n, elts_n / (1 << rxq_ctrl->rxq.sges_n)); - assert(ret == 0); return 0; error: + err = rte_errno; /* Save rte_errno before cleanup. */ elts_n = i; for (i = 0; (i != elts_n); ++i) { if ((*rxq_ctrl->rxq.elts)[i] != NULL) @@ -157,8 +157,8 @@ rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl) (*rxq_ctrl->rxq.elts)[i] = NULL; } DEBUG("%p: failed, freed everything", (void *)rxq_ctrl); - assert(ret > 0); - return ret; + rte_errno = err; /* Restore rte_errno. */ + return -rte_errno; } /** @@ -228,7 +228,7 @@ mlx5_rxq_cleanup(struct mlx5_rxq_ctrl *rxq_ctrl) * Memory pool for buffer allocations. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, @@ -240,7 +240,6 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, struct mlx5_rxq_data *rxq = (*priv->rxqs)[idx]; struct mlx5_rxq_ctrl *rxq_ctrl = container_of(rxq, struct mlx5_rxq_ctrl, rxq); - int ret = 0; if (!rte_is_power_of_2(desc)) { desc = 1 << log2above(desc); @@ -253,27 +252,27 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, if (idx >= priv->rxqs_n) { ERROR("%p: queue index out of range (%u >= %u)", (void *)dev, idx, priv->rxqs_n); - return -EOVERFLOW; + rte_errno = EOVERFLOW; + return -rte_errno; } if (!mlx5_rxq_releasable(dev, idx)) { - ret = EBUSY; ERROR("%p: unable to release queue index %u", (void *)dev, idx); - goto out; + rte_errno = EBUSY; + return -rte_errno; } mlx5_rxq_release(dev, idx); rxq_ctrl = mlx5_rxq_new(dev, idx, desc, socket, mp); if (!rxq_ctrl) { ERROR("%p: unable to allocate queue index %u", (void *)dev, idx); - ret = ENOMEM; - goto out; + rte_errno = ENOMEM; + return -rte_errno; } DEBUG("%p: adding RX queue %p to list", (void *)dev, (void *)rxq_ctrl); (*priv->rxqs)[idx] = &rxq_ctrl->rxq; -out: - return -ret; + return 0; } /** @@ -306,7 +305,7 @@ mlx5_rx_queue_release(void *dpdk_rxq) * Pointer to Ethernet device. * * @return - * 0 on success, negative on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. 
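rxq_alloc_elts() shows the canonical recovery for a partially completed allocation loop: clamp the loop bound to how far it got, free exactly those entries, and return through rte_errno. A reduced version (plain malloc/free instead of mbufs; free() cannot clobber rte_errno, so no save/restore is needed in this form):

#include <errno.h>
#include <stdlib.h>
#include <rte_errno.h>

static int
example_alloc_all(void **elts, unsigned int n)
{
	unsigned int i;

	for (i = 0; i != n; ++i) {
		elts[i] = malloc(16);
		if (!elts[i]) {
			rte_errno = ENOMEM;
			goto error;
		}
	}
	return 0;
error:
	while (i--) {	/* free only what was actually allocated */
		free(elts[i]);
		elts[i] = NULL;
	}
	return -rte_errno;
}
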
*/ int mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) @@ -325,7 +324,8 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) if (intr_handle->intr_vec == NULL) { ERROR("failed to allocate memory for interrupt vector," " Rx interrupts will not be supported"); - return -ENOMEM; + rte_errno = ENOMEM; + return -rte_errno; } intr_handle->type = RTE_INTR_HANDLE_EXT; for (i = 0; i != n; ++i) { @@ -348,16 +348,18 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) " (%d), Rx interrupts cannot be enabled", RTE_MAX_RXTX_INTR_VEC_ID); mlx5_rx_intr_vec_disable(dev); - return -1; + rte_errno = ENOMEM; + return -rte_errno; } fd = rxq_ibv->channel->fd; flags = fcntl(fd, F_GETFL); rc = fcntl(fd, F_SETFL, flags | O_NONBLOCK); if (rc < 0) { + rte_errno = errno; ERROR("failed to make Rx interrupt file descriptor" " %d non-blocking for queue index %d", fd, i); mlx5_rx_intr_vec_disable(dev); - return -1; + return -rte_errno; } intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count; intr_handle->efds[count] = fd; @@ -446,7 +448,7 @@ mlx5_arm_cq(struct mlx5_rxq_data *rxq, int sq_n_rxq) * Rx queue number. * * @return - * 0 on success, negative on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id) @@ -454,12 +456,11 @@ mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id) struct priv *priv = dev->data->dev_private; struct mlx5_rxq_data *rxq_data; struct mlx5_rxq_ctrl *rxq_ctrl; - int ret = 0; rxq_data = (*priv->rxqs)[rx_queue_id]; if (!rxq_data) { - ret = EINVAL; - goto exit; + rte_errno = EINVAL; + return -rte_errno; } rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); if (rxq_ctrl->irq) { @@ -467,16 +468,13 @@ mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id) rxq_ibv = mlx5_rxq_ibv_get(dev, rx_queue_id); if (!rxq_ibv) { - ret = EINVAL; - goto exit; + rte_errno = EINVAL; + return -rte_errno; } mlx5_arm_cq(rxq_data, rxq_data->cq_arm_sn); mlx5_rxq_ibv_release(rxq_ibv); } -exit: - if (ret) - WARN("unable to arm interrupt on rx queue %d", rx_queue_id); - return -ret; + return 0; } /** @@ -488,7 +486,7 @@ mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id) * Rx queue number. * * @return - * 0 on success, negative on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id) @@ -499,35 +497,36 @@ mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id) struct mlx5_rxq_ibv *rxq_ibv = NULL; struct ibv_cq *ev_cq; void *ev_ctx; - int ret = 0; + int ret; rxq_data = (*priv->rxqs)[rx_queue_id]; if (!rxq_data) { - ret = EINVAL; - goto exit; + rte_errno = EINVAL; + return -rte_errno; } rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); if (!rxq_ctrl->irq) - goto exit; + return 0; rxq_ibv = mlx5_rxq_ibv_get(dev, rx_queue_id); if (!rxq_ibv) { - ret = EINVAL; - goto exit; + rte_errno = EINVAL; + return -rte_errno; } ret = ibv_get_cq_event(rxq_ibv->channel, &ev_cq, &ev_ctx); if (ret || ev_cq != rxq_ibv->cq) { - ret = EINVAL; + rte_errno = EINVAL; goto exit; } rxq_data->cq_arm_sn++; ibv_ack_cq_events(rxq_ibv->cq, 1); + return 0; exit: + ret = rte_errno; /* Save rte_errno before cleanup. */ if (rxq_ibv) mlx5_rxq_ibv_release(rxq_ibv); - if (ret) - WARN("unable to disable interrupt on rx queue %d", - rx_queue_id); - return -ret; + WARN("unable to disable interrupt on rx queue %d", rx_queue_id); + rte_errno = ret; /* Restore rte_errno. 
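The interrupt-vector path now reports the exact syscall failure instead of a bare -1: errno is copied into rte_errno immediately after the failing fcntl(), before any other call can change it. The standard non-blocking toggle with that capture looks like:

#include <errno.h>
#include <fcntl.h>
#include <rte_errno.h>

static int
example_set_nonblock(int fd)
{
	int flags = fcntl(fd, F_GETFL);

	if (flags == -1 || fcntl(fd, F_SETFL, flags | O_NONBLOCK) == -1) {
		rte_errno = errno;	/* capture the cause immediately */
		return -rte_errno;
	}
	return 0;
}
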
*/ + return -rte_errno; } /** @@ -539,7 +538,7 @@ mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id) * Queue index in DPDK Rx queue array * * @return - * The Verbs object initialised if it can be created. + * The Verbs object initialised, NULL otherwise and rte_errno is set. */ struct mlx5_rxq_ibv * mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) @@ -574,6 +573,7 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) if (!tmpl) { ERROR("%p: cannot allocate verbs resources", (void *)rxq_ctrl); + rte_errno = ENOMEM; goto error; } tmpl->rxq_ctrl = rxq_ctrl; @@ -591,6 +591,7 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) if (!tmpl->channel) { ERROR("%p: Comp Channel creation failure", (void *)rxq_ctrl); + rte_errno = ENOMEM; goto error; } } @@ -619,6 +620,7 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) &attr.cq.mlx5)); if (tmpl->cq == NULL) { ERROR("%p: CQ creation failure", (void *)rxq_ctrl); + rte_errno = ENOMEM; goto error; } DEBUG("priv->device_attr.max_qp_wr is %d", @@ -655,6 +657,7 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) tmpl->wq = ibv_create_wq(priv->ctx, &attr.wq); if (tmpl->wq == NULL) { ERROR("%p: WQ creation failure", (void *)rxq_ctrl); + rte_errno = ENOMEM; goto error; } /* @@ -669,6 +672,7 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) ((1 << rxq_data->elts_n) >> rxq_data->sges_n), (1 << rxq_data->sges_n), attr.wq.max_wr, attr.wq.max_sge); + rte_errno = EINVAL; goto error; } /* Change queue state to ready. */ @@ -680,6 +684,7 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) if (ret) { ERROR("%p: WQ state to IBV_WQS_RDY failed", (void *)rxq_ctrl); + rte_errno = ret; goto error; } obj.cq.in = tmpl->cq; @@ -687,11 +692,14 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) obj.rwq.in = tmpl->wq; obj.rwq.out = &rwq; ret = mlx5dv_init_obj(&obj, MLX5DV_OBJ_CQ | MLX5DV_OBJ_RWQ); - if (ret != 0) + if (ret) { + rte_errno = ret; goto error; + } if (cq_info.cqe_size != RTE_CACHE_LINE_SIZE) { ERROR("Wrong MLX5_CQE_SIZE environment variable value: " "it should be set to %u", RTE_CACHE_LINE_SIZE); + rte_errno = EINVAL; goto error; } /* Fill the rings. */ @@ -735,6 +743,7 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE; return tmpl; error: + ret = rte_errno; /* Save rte_errno before cleanup. */ if (tmpl->wq) claim_zero(ibv_destroy_wq(tmpl->wq)); if (tmpl->cq) @@ -744,6 +753,7 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) if (tmpl->mr) mlx5_mr_release(tmpl->mr); priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE; + rte_errno = ret; /* Restore rte_errno. */ return NULL; } @@ -866,7 +876,7 @@ mlx5_rxq_ibv_releasable(struct mlx5_rxq_ibv *rxq_ibv) * NUMA socket on which memory must be allocated. * * @return - * A DPDK queue object on success. + * A DPDK queue object on success, NULL otherwise and rte_errno is set. 
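When reviewing the rte_errno assignments in mlx5_rxq_ibv_new(), keep in mind that libibverbs reports failure in two styles: object constructors (ibv_create_cq(), ibv_create_wq(), ...) return NULL and set errno, while ibv_modify_wq() and mlx5dv_init_obj() return a positive errno value directly. Both feed rte_errno, but from different sources (illustrative fragment, not part of the patch):

#include <errno.h>
#include <infiniband/verbs.h>
#include <rte_errno.h>

static int
example_verbs_errors(struct ibv_context *ctx, struct ibv_wq *wq,
		     struct ibv_wq_attr *mod)
{
	struct ibv_cq *cq;
	int ret;

	cq = ibv_create_cq(ctx, 64, NULL, NULL, 0);
	if (!cq) {
		rte_errno = errno;	/* NULL-returning style: read errno */
		return -rte_errno;
	}
	ret = ibv_modify_wq(wq, mod);
	if (ret) {
		rte_errno = ret;	/* int-returning style: value is the errno */
		ibv_destroy_cq(cq);
		return -rte_errno;
	}
	ibv_destroy_cq(cq);
	return 0;
}
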
*/ struct mlx5_rxq_ctrl * mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, @@ -882,8 +892,10 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, sizeof(*tmpl) + desc_n * sizeof(struct rte_mbuf *), 0, socket); - if (!tmpl) + if (!tmpl) { + rte_errno = ENOMEM; return NULL; + } tmpl->socket = socket; if (priv->dev->data->dev_conf.intr_conf.rxq) tmpl->irq = 1; @@ -913,6 +925,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, (void *)dev, 1 << sges_n, dev->data->dev_conf.rxmode.max_rx_pkt_len); + rte_errno = EOVERFLOW; goto error; } } else { @@ -931,6 +944,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, (void *)dev, desc, 1 << tmpl->rxq.sges_n); + rte_errno = EINVAL; goto error; } /* Toggle RX checksum offload if hardware supports it. */ @@ -989,7 +1003,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, * TX queue index. * * @return - * A pointer to the queue if it exists. + * A pointer to the queue if it exists, NULL otherwise. */ struct mlx5_rxq_ctrl * mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx) @@ -1052,7 +1066,8 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx) * TX queue index. * * @return - * 1 if the queue can be released. + * 1 if the queue can be released, negative errno otherwise and rte_errno is + * set. */ int mlx5_rxq_releasable(struct rte_eth_dev *dev, uint16_t idx) @@ -1060,8 +1075,10 @@ mlx5_rxq_releasable(struct rte_eth_dev *dev, uint16_t idx) struct priv *priv = dev->data->dev_private; struct mlx5_rxq_ctrl *rxq_ctrl; - if (!(*priv->rxqs)[idx]) - return -1; + if (!(*priv->rxqs)[idx]) { + rte_errno = EINVAL; + return -rte_errno; + } rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq); return (rte_atomic32_read(&rxq_ctrl->refcnt) == 1); } @@ -1101,7 +1118,7 @@ mlx5_rxq_verify(struct rte_eth_dev *dev) * Number of queues in the array. * * @return - * A new indirection table. + * The Verbs object initialised, NULL otherwise and rte_errno is set. */ struct mlx5_ind_table_ibv * mlx5_ind_table_ibv_new(struct rte_eth_dev *dev, uint16_t queues[], @@ -1118,8 +1135,10 @@ mlx5_ind_table_ibv_new(struct rte_eth_dev *dev, uint16_t queues[], ind_tbl = rte_calloc(__func__, 1, sizeof(*ind_tbl) + queues_n * sizeof(uint16_t), 0); - if (!ind_tbl) + if (!ind_tbl) { + rte_errno = ENOMEM; return NULL; + } for (i = 0; i != queues_n; ++i) { struct mlx5_rxq_ctrl *rxq = mlx5_rxq_get(dev, queues[i]); @@ -1139,8 +1158,10 @@ mlx5_ind_table_ibv_new(struct rte_eth_dev *dev, uint16_t queues[], .ind_tbl = wq, .comp_mask = 0, }); - if (!ind_tbl->ind_table) + if (!ind_tbl->ind_table) { + rte_errno = errno; goto error; + } rte_atomic32_inc(&ind_tbl->refcnt); LIST_INSERT_HEAD(&priv->ind_tbls, ind_tbl, next); DEBUG("%p: Indirection table %p: refcnt %d", (void *)dev, @@ -1264,7 +1285,7 @@ mlx5_ind_table_ibv_verify(struct rte_eth_dev *dev) * Number of queues. * * @return - * An hash Rx queue on success. + * The Verbs object initialised, NULL otherwise and rte_errno is set. */ struct mlx5_hrxq * mlx5_hrxq_new(struct rte_eth_dev *dev, uint8_t *rss_key, uint8_t rss_key_len, @@ -1274,13 +1295,16 @@ mlx5_hrxq_new(struct rte_eth_dev *dev, uint8_t *rss_key, uint8_t rss_key_len, struct mlx5_hrxq *hrxq; struct mlx5_ind_table_ibv *ind_tbl; struct ibv_qp *qp; + int err; queues_n = hash_fields ? 
queues_n : 1; ind_tbl = mlx5_ind_table_ibv_get(dev, queues, queues_n); if (!ind_tbl) ind_tbl = mlx5_ind_table_ibv_new(dev, queues, queues_n); - if (!ind_tbl) + if (!ind_tbl) { + rte_errno = ENOMEM; return NULL; + } qp = ibv_create_qp_ex( priv->ctx, &(struct ibv_qp_init_attr_ex){ @@ -1298,8 +1322,10 @@ mlx5_hrxq_new(struct rte_eth_dev *dev, uint8_t *rss_key, uint8_t rss_key_len, .rwq_ind_tbl = ind_tbl->ind_table, .pd = priv->pd, }); - if (!qp) + if (!qp) { + rte_errno = errno; goto error; + } hrxq = rte_calloc(__func__, 1, sizeof(*hrxq) + rss_key_len, 0); if (!hrxq) goto error; @@ -1314,9 +1340,11 @@ mlx5_hrxq_new(struct rte_eth_dev *dev, uint8_t *rss_key, uint8_t rss_key_len, (void *)hrxq, rte_atomic32_read(&hrxq->refcnt)); return hrxq; error: + err = rte_errno; /* Save rte_errno before cleanup. */ mlx5_ind_table_ibv_release(dev, ind_tbl); if (qp) claim_zero(ibv_destroy_qp(qp)); + rte_errno = err; /* Restore rte_errno. */ return NULL; } diff --git a/drivers/net/mlx5/mlx5_socket.c b/drivers/net/mlx5/mlx5_socket.c index 8f400d06e..5499a01b9 100644 --- a/drivers/net/mlx5/mlx5_socket.c +++ b/drivers/net/mlx5/mlx5_socket.c @@ -49,7 +49,7 @@ * Pointer to Ethernet device. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_socket_init(struct rte_eth_dev *dev) @@ -67,45 +67,51 @@ mlx5_socket_init(struct rte_eth_dev *dev) */ ret = socket(AF_UNIX, SOCK_STREAM, 0); if (ret < 0) { + rte_errno = errno; WARN("secondary process not supported: %s", strerror(errno)); - return ret; + goto error; } priv->primary_socket = ret; flags = fcntl(priv->primary_socket, F_GETFL, 0); - if (flags == -1) - goto out; + if (flags == -1) { + rte_errno = errno; + goto error; + } ret = fcntl(priv->primary_socket, F_SETFL, flags | O_NONBLOCK); - if (ret < 0) - goto out; + if (ret < 0) { + rte_errno = errno; + goto error; + } snprintf(sun.sun_path, sizeof(sun.sun_path), "/var/tmp/%s_%d", MLX5_DRIVER_NAME, priv->primary_socket); remove(sun.sun_path); ret = bind(priv->primary_socket, (const struct sockaddr *)&sun, sizeof(sun)); if (ret < 0) { + rte_errno = errno; WARN("cannot bind socket, secondary process not supported: %s", strerror(errno)); goto close; } ret = listen(priv->primary_socket, 0); if (ret < 0) { + rte_errno = errno; WARN("Secondary process not supported: %s", strerror(errno)); goto close; } - return ret; + return 0; close: remove(sun.sun_path); -out: +error: claim_zero(close(priv->primary_socket)); priv->primary_socket = 0; - return -(ret); + return -rte_errno; } /** * Un-Initialise the socket to communicate with the secondary process * * @param[in] dev - * Pointer to Ethernet device. */ void mlx5_socket_uninit(struct rte_eth_dev *dev) @@ -155,19 +161,21 @@ mlx5_socket_handle(struct rte_eth_dev *dev) ret = setsockopt(conn_sock, SOL_SOCKET, SO_PASSCRED, &(int){1}, sizeof(int)); if (ret < 0) { - WARN("cannot change socket options"); - goto out; + ret = errno; + WARN("cannot change socket options: %s", strerror(rte_errno)); + goto error; } ret = recvmsg(conn_sock, &msg, MSG_WAITALL); if (ret < 0) { - WARN("received an empty message: %s", strerror(errno)); - goto out; + ret = errno; + WARN("received an empty message: %s", strerror(rte_errno)); + goto error; } /* Expect to receive credentials only. 
*/ cmsg = CMSG_FIRSTHDR(&msg); if (cmsg == NULL) { WARN("no message"); - goto out; + goto error; } if ((cmsg->cmsg_type == SCM_CREDENTIALS) && (cmsg->cmsg_len >= sizeof(*cred))) { @@ -177,13 +185,13 @@ mlx5_socket_handle(struct rte_eth_dev *dev) cmsg = CMSG_NXTHDR(&msg, cmsg); if (cmsg != NULL) { WARN("Message wrongly formatted"); - goto out; + goto error; } /* Make sure all the ancillary data was received and valid. */ if ((cred == NULL) || (cred->uid != getuid()) || (cred->gid != getgid())) { WARN("wrong credentials"); - goto out; + goto error; } /* Set-up the ancillary data. */ cmsg = CMSG_FIRSTHDR(&msg); @@ -196,7 +204,7 @@ mlx5_socket_handle(struct rte_eth_dev *dev) ret = sendmsg(conn_sock, &msg, 0); if (ret < 0) WARN("cannot send response"); -out: +error: close(conn_sock); } @@ -207,7 +215,7 @@ mlx5_socket_handle(struct rte_eth_dev *dev) * Pointer to Ethernet structure. * * @return - * fd on success, negative errno value on failure. + * fd on success, negative errno value otherwise and rte_errno is set. */ int mlx5_socket_connect(struct rte_eth_dev *dev) @@ -216,7 +224,7 @@ mlx5_socket_connect(struct rte_eth_dev *dev) struct sockaddr_un sun = { .sun_family = AF_UNIX, }; - int socket_fd; + int socket_fd = -1; int *fd = NULL; int ret; struct ucred *cred; @@ -236,57 +244,67 @@ mlx5_socket_connect(struct rte_eth_dev *dev) ret = socket(AF_UNIX, SOCK_STREAM, 0); if (ret < 0) { + rte_errno = errno; WARN("cannot connect to primary"); - return ret; + goto error; } socket_fd = ret; snprintf(sun.sun_path, sizeof(sun.sun_path), "/var/tmp/%s_%d", MLX5_DRIVER_NAME, priv->primary_socket); ret = connect(socket_fd, (const struct sockaddr *)&sun, sizeof(sun)); if (ret < 0) { + rte_errno = errno; WARN("cannot connect to primary"); - goto out; + goto error; } cmsg = CMSG_FIRSTHDR(&msg); if (cmsg == NULL) { + rte_errno = EINVAL; DEBUG("cannot get first message"); - goto out; + goto error; } cmsg->cmsg_level = SOL_SOCKET; cmsg->cmsg_type = SCM_CREDENTIALS; cmsg->cmsg_len = CMSG_LEN(sizeof(*cred)); cred = (struct ucred *)CMSG_DATA(cmsg); if (cred == NULL) { + rte_errno = EINVAL; DEBUG("no credentials received"); - goto out; + goto error; } cred->pid = getpid(); cred->uid = getuid(); cred->gid = getgid(); ret = sendmsg(socket_fd, &msg, MSG_DONTWAIT); if (ret < 0) { + rte_errno = errno; WARN("cannot send credentials to primary: %s", strerror(errno)); - goto out; + goto error; } ret = recvmsg(socket_fd, &msg, MSG_WAITALL); if (ret <= 0) { + rte_errno = errno; WARN("no message from primary: %s", strerror(errno)); - goto out; + goto error; } cmsg = CMSG_FIRSTHDR(&msg); if (cmsg == NULL) { + rte_errno = EINVAL; WARN("No file descriptor received"); - goto out; + goto error; } fd = (int *)CMSG_DATA(cmsg); - if (*fd <= 0) { + if (*fd < 0) { WARN("no file descriptor received: %s", strerror(errno)); - ret = *fd; - goto out; + rte_errno = *fd; + goto error; } ret = *fd; -out: close(socket_fd); - return ret; + return 0; +error: + if (socket_fd != -1) + close(socket_fd); + return -rte_errno; } diff --git a/drivers/net/mlx5/mlx5_stats.c b/drivers/net/mlx5/mlx5_stats.c index 6d454e5e8..b3500df97 100644 --- a/drivers/net/mlx5/mlx5_stats.c +++ b/drivers/net/mlx5/mlx5_stats.c @@ -140,7 +140,8 @@ static const unsigned int xstats_n = RTE_DIM(mlx5_counters_init); * Counters table output buffer. * * @return - * 0 on success and stats is filled, negative on error. + * 0 on success and stats is filled, negative errno value otherwise and + * rte_errno is set. 
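The mlx5_socket_connect() rework is a textbook use of a sentinel descriptor: socket_fd starts at -1, every failure funnels into one error label, and close() runs only once a descriptor actually exists. In outline (hypothetical function, socket details elided):

#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>
#include <rte_errno.h>

static int
example_connect(void)
{
	int fd = -1;	/* sentinel: nothing to clean up yet */

	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd < 0) {
		rte_errno = errno;
		goto error;
	}
	/* ... connect()/sendmsg()/recvmsg() failures also goto error ... */
	close(fd);
	return 0;
error:
	if (fd != -1)
		close(fd);
	return -rte_errno;
}
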
*/ static int mlx5_read_dev_counters(struct rte_eth_dev *dev, uint64_t *stats) @@ -152,13 +153,15 @@ mlx5_read_dev_counters(struct rte_eth_dev *dev, uint64_t *stats) unsigned int stats_sz = xstats_ctrl->stats_n * sizeof(uint64_t); unsigned char et_stat_buf[sizeof(struct ethtool_stats) + stats_sz]; struct ethtool_stats *et_stats = (struct ethtool_stats *)et_stat_buf; + int ret; et_stats->cmd = ETHTOOL_GSTATS; et_stats->n_stats = xstats_ctrl->stats_n; ifr.ifr_data = (caddr_t)et_stats; - if (mlx5_ifreq(dev, SIOCETHTOOL, &ifr) != 0) { + ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); + if (ret) { WARN("unable to read statistic values from device"); - return -1; + return ret; } for (i = 0; i != xstats_n; ++i) { if (mlx5_counters_init[i].ib) { @@ -190,18 +193,21 @@ mlx5_read_dev_counters(struct rte_eth_dev *dev, uint64_t *stats) * Pointer to Ethernet device. * * @return - * Number of statistics on success, -1 on error. + * Number of statistics on success, negative errno value otherwise and + * rte_errno is set. */ static int mlx5_ethtool_get_stats_n(struct rte_eth_dev *dev) { struct ethtool_drvinfo drvinfo; struct ifreq ifr; + int ret; drvinfo.cmd = ETHTOOL_GDRVINFO; ifr.ifr_data = (caddr_t)&drvinfo; - if (mlx5_ifreq(dev, SIOCETHTOOL, &ifr) != 0) { + ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); + if (ret) { WARN("unable to query number of statistics"); - return -1; + return ret; } return drvinfo.n_stats; } @@ -223,12 +229,14 @@ mlx5_xstats_init(struct rte_eth_dev *dev) struct ethtool_gstrings *strings = NULL; unsigned int dev_stats_n; unsigned int str_sz; + int ret; - dev_stats_n = mlx5_ethtool_get_stats_n(dev); - if (dev_stats_n < 1) { + ret = mlx5_ethtool_get_stats_n(dev); + if (ret < 0) { WARN("no extended statistics available"); return; } + dev_stats_n = ret; xstats_ctrl->stats_n = dev_stats_n; /* Allocate memory to grab stat names and values. */ str_sz = dev_stats_n * ETH_GSTRING_LEN; @@ -243,7 +251,8 @@ mlx5_xstats_init(struct rte_eth_dev *dev) strings->string_set = ETH_SS_STATS; strings->len = dev_stats_n; ifr.ifr_data = (caddr_t)strings; - if (mlx5_ifreq(dev, SIOCETHTOOL, &ifr) != 0) { + ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); + if (ret) { WARN("unable to get statistic names"); goto free; } @@ -272,7 +281,9 @@ mlx5_xstats_init(struct rte_eth_dev *dev) } /* Copy to base at first time. */ assert(xstats_n <= MLX5_MAX_XSTATS); - mlx5_read_dev_counters(dev, xstats_ctrl->base); + ret = mlx5_read_dev_counters(dev, xstats_ctrl->base); + if (ret) + ERROR("cannot read device counters: %s", strerror(rte_errno)); free: rte_free(strings); } @@ -289,7 +300,7 @@ mlx5_xstats_init(struct rte_eth_dev *dev) * * @return * Number of extended stats on success and stats is filled, - * negative on error. + * negative on error and rte_errno is set. */ int mlx5_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *stats, @@ -298,15 +309,15 @@ mlx5_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *stats, struct priv *priv = dev->data->dev_private; unsigned int i; uint64_t counters[n]; - int ret = 0; if (n >= xstats_n && stats) { struct mlx5_xstats_ctrl *xstats_ctrl = &priv->xstats_ctrl; int stats_n; + int ret; stats_n = mlx5_ethtool_get_stats_n(dev); if (stats_n < 0) - return -1; + return stats_n; if (xstats_ctrl->stats_n != stats_n) mlx5_xstats_init(dev); ret = mlx5_read_dev_counters(dev, counters); @@ -327,6 +338,10 @@ mlx5_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *stats, * Pointer to Ethernet device structure. * @param[out] stats * Stats structure output buffer. 
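The mlx5_xstats_init() hunk also quietly fixes a type trap in the old code: dev_stats_n is unsigned, so assigning the helper's result to it before checking would turn a negative errno into a huge positive count. The result must pass through a signed temporary first:

#include <rte_errno.h>

/* Hypothetical helper: returns a count, or a negative errno value. */
static int example_get_count(void) { return 4; }

static int
example_read(void)
{
	unsigned int count;
	int ret = example_get_count();

	if (ret < 0)
		return ret;	/* this test could never fire on an unsigned */
	count = ret;		/* safe to store only after the sign check */
	return (int)count;
}
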
+ * + * @return + * 0 on success and stats is filled, negative errno value otherwise and + * rte_errno is set. */ int mlx5_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) @@ -431,14 +446,22 @@ mlx5_xstats_reset(struct rte_eth_dev *dev) unsigned int i; unsigned int n = xstats_n; uint64_t counters[n]; + int ret; stats_n = mlx5_ethtool_get_stats_n(dev); - if (stats_n < 0) + if (stats_n < 0) { + ERROR("%p cannot get stats: %s", (void *)dev, + strerror(-stats_n)); return; + } if (xstats_ctrl->stats_n != stats_n) mlx5_xstats_init(dev); - if (mlx5_read_dev_counters(dev, counters) < 0) + ret = mlx5_read_dev_counters(dev, counters); + if (ret) { + ERROR("%p cannot read device counters: %s", (void *)dev, + strerror(rte_errno)); return; + } for (i = 0; i != n; ++i) xstats_ctrl->base[i] = counters[i]; } diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index 19434b921..5d2eff506 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -64,14 +64,14 @@ mlx5_txq_stop(struct rte_eth_dev *dev) * Pointer to Ethernet device structure. * * @return - * 0 on success, errno on error. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_txq_start(struct rte_eth_dev *dev) { struct priv *priv = dev->data->dev_private; unsigned int i; - int ret = 0; + int ret; /* Add memory regions to Tx queues. */ for (i = 0; i != priv->txqs_n; ++i) { @@ -89,17 +89,19 @@ mlx5_txq_start(struct rte_eth_dev *dev) txq_alloc_elts(txq_ctrl); txq_ctrl->ibv = mlx5_txq_ibv_new(dev, i); if (!txq_ctrl->ibv) { - ret = ENOMEM; + rte_errno = ENOMEM; goto error; } } ret = mlx5_tx_uar_remap(dev, priv->ctx->cmd_fd); if (ret) goto error; - return ret; + return 0; error: + ret = rte_errno; /* Save rte_errno before cleanup. */ mlx5_txq_stop(dev); - return ret; + rte_errno = ret; /* Restore rte_errno. */ + return -rte_errno; } /** @@ -125,7 +127,7 @@ mlx5_rxq_stop(struct rte_eth_dev *dev) * Pointer to Ethernet device structure. * * @return - * 0 on success, errno on error. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_rxq_start(struct rte_eth_dev *dev) @@ -143,15 +145,15 @@ mlx5_rxq_start(struct rte_eth_dev *dev) if (ret) goto error; rxq_ctrl->ibv = mlx5_rxq_ibv_new(dev, i); - if (!rxq_ctrl->ibv) { - ret = ENOMEM; + if (!rxq_ctrl->ibv) goto error; - } } - return -ret; + return 0; error: + ret = rte_errno; /* Save rte_errno before cleanup. */ mlx5_rxq_stop(dev); - return -ret; + rte_errno = ret; /* Restore rte_errno. */ + return -rte_errno; } /** @@ -163,48 +165,48 @@ mlx5_rxq_start(struct rte_eth_dev *dev) * Pointer to Ethernet device structure. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. 
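One spelling detail in mlx5_xstats_reset(): with only the negative return value in hand, strerror() needs the sign flipped back (strerror(-stats_n)); right after a helper that has set rte_errno, the global can be used as-is. Under this series' convention the two are equivalent:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <rte_errno.h>

static int example_fail(void) { rte_errno = EIO; return -rte_errno; }

static void
example_log_both_ways(void)
{
	int ret = example_fail();

	if (ret < 0) {
		/* same message, two sources for the same code */
		fprintf(stderr, "failed: %s\n", strerror(-ret));
		fprintf(stderr, "failed: %s\n", strerror(rte_errno));
	}
}
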
*/ int mlx5_dev_start(struct rte_eth_dev *dev) { struct priv *priv = dev->data->dev_private; struct mlx5_mr *mr = NULL; - int err; + int ret; dev->data->dev_started = 1; - err = mlx5_flow_create_drop_queue(dev); - if (err) { + ret = mlx5_flow_create_drop_queue(dev); + if (ret) { ERROR("%p: Drop queue allocation failed: %s", - (void *)dev, strerror(err)); + (void *)dev, strerror(rte_errno)); goto error; } DEBUG("%p: allocating and configuring hash RX queues", (void *)dev); rte_mempool_walk(mlx5_mp2mr_iter, priv); - err = mlx5_txq_start(dev); - if (err) { - ERROR("%p: TXQ allocation failed: %s", - (void *)dev, strerror(err)); + ret = mlx5_txq_start(dev); + if (ret) { + ERROR("%p: Tx Queue allocation failed: %s", + (void *)dev, strerror(rte_errno)); goto error; } - err = mlx5_rxq_start(dev); - if (err) { - ERROR("%p: RXQ allocation failed: %s", - (void *)dev, strerror(err)); + ret = mlx5_rxq_start(dev); + if (ret) { + ERROR("%p: Rx Queue allocation failed: %s", + (void *)dev, strerror(rte_errno)); goto error; } - err = mlx5_rx_intr_vec_enable(dev); - if (err) { - ERROR("%p: RX interrupt vector creation failed", - (void *)priv); + ret = mlx5_rx_intr_vec_enable(dev); + if (ret) { + ERROR("%p: Rx interrupt vector creation failed", + (void *)dev); goto error; } mlx5_xstats_init(dev); /* Update link status and Tx/Rx callbacks for the first time. */ memset(&dev->data->dev_link, 0, sizeof(struct rte_eth_link)); INFO("Forcing port %u link to be up", dev->data->port_id); - err = mlx5_force_link_status_change(dev, ETH_LINK_UP); - if (err) { + ret = mlx5_force_link_status_change(dev, ETH_LINK_UP); + if (ret) { DEBUG("Failed to set port %u link to be up", dev->data->port_id); goto error; @@ -212,6 +214,7 @@ mlx5_dev_start(struct rte_eth_dev *dev) mlx5_dev_interrupt_handler_install(dev); return 0; error: + ret = rte_errno; /* Save rte_errno before cleanup. */ /* Rollback. */ dev->data->dev_started = 0; for (mr = LIST_FIRST(&priv->mr); mr; mr = LIST_FIRST(&priv->mr)) @@ -221,7 +224,8 @@ mlx5_dev_start(struct rte_eth_dev *dev) mlx5_txq_stop(dev); mlx5_rxq_stop(dev); mlx5_flow_delete_drop_queue(dev); - return err; + rte_errno = ret; /* Restore rte_errno. */ + return -rte_errno; } /** @@ -265,7 +269,7 @@ mlx5_dev_stop(struct rte_eth_dev *dev) * Pointer to Ethernet device structure. * * @return - * 0 on success. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_traffic_enable(struct rte_eth_dev *dev) @@ -303,8 +307,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev) .type = 0, }; - claim_zero(mlx5_ctrl_flow(dev, &promisc, &promisc)); - return 0; + ret = mlx5_ctrl_flow(dev, &promisc, &promisc); + if (ret) + goto error; } if (dev->data->all_multicast) { struct rte_flow_item_eth multicast = { @@ -313,7 +318,9 @@ mlx5_traffic_enable(struct rte_eth_dev *dev) .type = 0, }; - claim_zero(mlx5_ctrl_flow(dev, &multicast, &multicast)); + ret = mlx5_ctrl_flow(dev, &multicast, &multicast); + if (ret) + goto error; } else { /* Add broadcast/multicast flows. */ for (i = 0; i != vlan_filter_n; ++i) { @@ -373,15 +380,17 @@ mlx5_traffic_enable(struct rte_eth_dev *dev) goto error; } if (!vlan_filter_n) { - ret = mlx5_ctrl_flow(dev, &unicast, - &unicast_mask); + ret = mlx5_ctrl_flow(dev, &unicast, &unicast_mask); if (ret) goto error; } } return 0; error: - return rte_errno; + ret = rte_errno; /* Save rte_errno before cleanup. */ + mlx5_flow_list_flush(dev, &priv->ctrl_flows); + rte_errno = ret; /* Restore rte_errno. 
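mlx5_dev_start() now has a fully transactional shape: each stage logs its own failure through rte_errno and jumps to a single rollback that tears everything down, with the original cause preserved across the teardown. Condensed (stage and rollback names are illustrative):

#include <stdio.h>
#include <string.h>
#include <rte_errno.h>

static int example_stage_a(void) { return 0; }	/* 0 or -rte_errno style */
static int example_stage_b(void) { return 0; }
static void example_rollback(void) { }		/* may clobber rte_errno */

static int
example_start(void)
{
	int ret;

	ret = example_stage_a();
	if (ret) {
		fprintf(stderr, "stage A failed: %s\n", strerror(rte_errno));
		goto error;
	}
	ret = example_stage_b();
	if (ret) {
		fprintf(stderr, "stage B failed: %s\n", strerror(rte_errno));
		goto error;
	}
	return 0;
error:
	ret = rte_errno;	/* save cause before rollback */
	example_rollback();
	rte_errno = ret;	/* restore */
	return -rte_errno;
}
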
*/ + return -rte_errno; } @@ -406,14 +415,14 @@ mlx5_traffic_disable(struct rte_eth_dev *dev) * Pointer to Ethernet device private data. * * @return - * 0 on success. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_traffic_restart(struct rte_eth_dev *dev) { if (dev->data->dev_started) { mlx5_traffic_disable(dev); - mlx5_traffic_enable(dev); + return mlx5_traffic_enable(dev); } return 0; } diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c index 53a21c259..baf3fe984 100644 --- a/drivers/net/mlx5/mlx5_txq.c +++ b/drivers/net/mlx5/mlx5_txq.c @@ -130,7 +130,7 @@ txq_free_elts(struct mlx5_txq_ctrl *txq_ctrl) * Thresholds parameters. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, @@ -140,7 +140,6 @@ mlx5_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, struct mlx5_txq_data *txq = (*priv->txqs)[idx]; struct mlx5_txq_ctrl *txq_ctrl = container_of(txq, struct mlx5_txq_ctrl, txq); - int ret = 0; if (desc <= MLX5_TX_COMP_THRESH) { WARN("%p: number of descriptors requested for TX queue %u" @@ -160,27 +159,26 @@ mlx5_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, if (idx >= priv->txqs_n) { ERROR("%p: queue index out of range (%u >= %u)", (void *)dev, idx, priv->txqs_n); - return -EOVERFLOW; + rte_errno = EOVERFLOW; + return -rte_errno; } if (!mlx5_txq_releasable(dev, idx)) { - ret = EBUSY; + rte_errno = EBUSY; ERROR("%p: unable to release queue index %u", (void *)dev, idx); - goto out; + return -rte_errno; } mlx5_txq_release(dev, idx); txq_ctrl = mlx5_txq_new(dev, idx, desc, socket, conf); if (!txq_ctrl) { ERROR("%p: unable to allocate queue index %u", (void *)dev, idx); - ret = ENOMEM; - goto out; + return -rte_errno; } DEBUG("%p: adding TX queue %p to list", (void *)dev, (void *)txq_ctrl); (*priv->txqs)[idx] = &txq_ctrl->txq; -out: - return -ret; + return 0; } /** @@ -203,9 +201,9 @@ mlx5_tx_queue_release(void *dpdk_txq) priv = txq_ctrl->priv; for (i = 0; (i != priv->txqs_n); ++i) if ((*priv->txqs)[i] == txq) { + mlx5_txq_release(priv->dev, i); DEBUG("%p: removing TX queue %p from list", (void *)priv->dev, (void *)txq_ctrl); - mlx5_txq_release(priv->dev, i); break; } } @@ -222,7 +220,7 @@ mlx5_tx_queue_release(void *dpdk_txq) * Verbs file descriptor to map UAR pages. * * @return - * 0 on success, errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_tx_uar_remap(struct rte_eth_dev *dev, int fd) @@ -239,7 +237,6 @@ mlx5_tx_uar_remap(struct rte_eth_dev *dev, int fd) struct mlx5_txq_ctrl *txq_ctrl; int already_mapped; size_t page_size = sysconf(_SC_PAGESIZE); - int r; memset(pages, 0, priv->txqs_n * sizeof(uintptr_t)); /* @@ -278,8 +275,8 @@ mlx5_tx_uar_remap(struct rte_eth_dev *dev, int fd) /* fixed mmap have to return same address */ ERROR("call to mmap failed on UAR for txq %d\n", i); - r = ENXIO; - return r; + rte_errno = ENXIO; + return -rte_errno; } } if (rte_eal_process_type() == RTE_PROC_PRIMARY) /* save once */ @@ -300,7 +297,7 @@ mlx5_tx_uar_remap(struct rte_eth_dev *dev, int fd) * Queue index in DPDK Rx queue array * * @return - * The Verbs object initialised if it can be created. + * The Verbs object initialised, NULL otherwise and rte_errno is set. 
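Further up in this hunk, the new error path in mlx5_traffic_enable() makes control-flow installation all-or-nothing: as soon as one rule fails, everything installed so far is flushed, so a later mlx5_traffic_restart() (which now simply propagates enable's status) begins from a clean slate rather than a half-installed set. The shape, with hypothetical install/flush helpers:

#include <rte_errno.h>

static int example_install_one(unsigned int i) { (void)i; return 0; }
static void example_flush_all(void) { }	/* may clobber rte_errno */

static int
example_install_all(unsigned int n)
{
	unsigned int i;
	int ret;

	for (i = 0; i != n; ++i) {
		ret = example_install_one(i);
		if (ret)
			goto error;
	}
	return 0;
error:
	ret = rte_errno;	/* keep the original cause */
	example_flush_all();	/* drop the rules already installed */
	rte_errno = ret;
	return -rte_errno;
}
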
*/ struct mlx5_txq_ibv * mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) @@ -329,7 +326,8 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) priv->verbs_alloc_ctx.obj = txq_ctrl; if (mlx5_getenv_int("MLX5_ENABLE_CQE_COMPRESSION")) { ERROR("MLX5_ENABLE_CQE_COMPRESSION must never be set"); - goto error; + rte_errno = EINVAL; + return NULL; } memset(&tmpl, 0, sizeof(struct mlx5_txq_ibv)); /* MRs will be registered in mp2mr[] later. */ @@ -343,6 +341,7 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) tmpl.cq = ibv_create_cq(priv->ctx, cqe_n, NULL, NULL, 0); if (tmpl.cq == NULL) { ERROR("%p: CQ creation failure", (void *)txq_ctrl); + rte_errno = errno; goto error; } attr.init = (struct ibv_qp_init_attr_ex){ @@ -384,6 +383,7 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) tmpl.qp = ibv_create_qp_ex(priv->ctx, &attr.init); if (tmpl.qp == NULL) { ERROR("%p: QP creation failure", (void *)txq_ctrl); + rte_errno = errno; goto error; } attr.mod = (struct ibv_qp_attr){ @@ -395,6 +395,7 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) ret = ibv_modify_qp(tmpl.qp, &attr.mod, (IBV_QP_STATE | IBV_QP_PORT)); if (ret) { ERROR("%p: QP state to IBV_QPS_INIT failed", (void *)txq_ctrl); + rte_errno = errno; goto error; } attr.mod = (struct ibv_qp_attr){ @@ -403,18 +404,21 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) ret = ibv_modify_qp(tmpl.qp, &attr.mod, IBV_QP_STATE); if (ret) { ERROR("%p: QP state to IBV_QPS_RTR failed", (void *)txq_ctrl); + rte_errno = errno; goto error; } attr.mod.qp_state = IBV_QPS_RTS; ret = ibv_modify_qp(tmpl.qp, &attr.mod, IBV_QP_STATE); if (ret) { ERROR("%p: QP state to IBV_QPS_RTS failed", (void *)txq_ctrl); + rte_errno = errno; goto error; } txq_ibv = rte_calloc_socket(__func__, 1, sizeof(struct mlx5_txq_ibv), 0, txq_ctrl->socket); if (!txq_ibv) { ERROR("%p: cannot allocate memory", (void *)txq_ctrl); + rte_errno = ENOMEM; goto error; } obj.cq.in = tmpl.cq; @@ -422,11 +426,14 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) obj.qp.in = tmpl.qp; obj.qp.out = &qp; ret = mlx5dv_init_obj(&obj, MLX5DV_OBJ_CQ | MLX5DV_OBJ_QP); - if (ret != 0) + if (ret != 0) { + rte_errno = errno; goto error; + } if (cq_info.cqe_size != RTE_CACHE_LINE_SIZE) { ERROR("Wrong MLX5_CQE_SIZE environment variable value: " "it should be set to %u", RTE_CACHE_LINE_SIZE); + rte_errno = EINVAL; goto error; } txq_data->cqe_n = log2above(cq_info.cqe_cnt); @@ -450,6 +457,7 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) txq_ctrl->uar_mmap_offset = qp.uar_mmap_offset; } else { ERROR("Failed to retrieve UAR info, invalid libmlx5.so version"); + rte_errno = EINVAL; goto error; } DEBUG("%p: Verbs Tx queue %p: refcnt %d", (void *)dev, @@ -458,11 +466,13 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE; return txq_ibv; error: + ret = rte_errno; /* Save rte_errno before cleanup. */ if (tmpl.cq) claim_zero(ibv_destroy_cq(tmpl.cq)); if (tmpl.qp) claim_zero(ibv_destroy_qp(tmpl.qp)); priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE; + rte_errno = ret; /* Restore rte_errno. */ return NULL; } @@ -574,7 +584,7 @@ mlx5_txq_ibv_verify(struct rte_eth_dev *dev) * Thresholds parameters. * * @return - * A DPDK queue object on success. + * A DPDK queue object on success, NULL otherwise and rte_errno is set. 
*/ struct mlx5_txq_ctrl * mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, @@ -590,8 +600,10 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, sizeof(*tmpl) + desc * sizeof(struct rte_mbuf *), 0, socket); - if (!tmpl) + if (!tmpl) { + rte_errno = ENOMEM; return NULL; + } assert(desc > MLX5_TX_COMP_THRESH); tmpl->txq.flags = conf->txq_flags; tmpl->priv = priv; diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c index 7f070058e..7e4830138 100644 --- a/drivers/net/mlx5/mlx5_vlan.c +++ b/drivers/net/mlx5/mlx5_vlan.c @@ -54,14 +54,13 @@ * Toggle filter. * * @return - * 0 on success, negative errno value on failure. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) { struct priv *priv = dev->data->dev_private; unsigned int i; - int ret = 0; DEBUG("%p: %s VLAN filter ID %" PRIu16, (void *)dev, (on ? "enable" : "disable"), vlan_id); @@ -71,8 +70,8 @@ mlx5_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) break; /* Check if there's room for another VLAN filter. */ if (i == RTE_DIM(priv->vlan_filter)) { - ret = -ENOMEM; - goto out; + rte_errno = ENOMEM; + return -rte_errno; } if (i < priv->vlan_filter_n) { assert(priv->vlan_filter_n != 0); @@ -95,10 +94,10 @@ mlx5_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) priv->vlan_filter[priv->vlan_filter_n] = vlan_id; ++priv->vlan_filter_n; } - if (dev->data->dev_started) - mlx5_traffic_restart(dev); out: - return ret; + if (dev->data->dev_started) + return mlx5_traffic_restart(dev); + return 0; } /** @@ -122,7 +121,7 @@ mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on) uint16_t vlan_offloads = (on ? IBV_WQ_FLAGS_CVLAN_STRIPPING : 0) | 0; - int err; + int ret; /* Validate hw support */ if (!priv->hw_vlan_strip) { @@ -146,10 +145,10 @@ mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on) .flags_mask = IBV_WQ_FLAGS_CVLAN_STRIPPING, .flags = vlan_offloads, }; - err = ibv_modify_wq(rxq_ctrl->ibv->wq, &mod); - if (err) { + ret = ibv_modify_wq(rxq_ctrl->ibv->wq, &mod); + if (ret) { ERROR("%p: failed to modified stripping mode: %s", - (void *)dev, strerror(err)); + (void *)dev, strerror(rte_errno)); return; } /* Update related bits in RX queue. */ @@ -163,6 +162,9 @@ mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on) * Pointer to Ethernet device structure. * @param mask * VLAN offload bit mask. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_vlan_offload_set(struct rte_eth_dev *dev, int mask) -- 2.11.0