From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from EUR02-AM5-obe.outbound.protection.outlook.com (mail-eopbgr00068.outbound.protection.outlook.com [40.107.0.68]) by dpdk.org (Postfix) with ESMTP id 7E5D1AAC7 for ; Tue, 5 Jun 2018 02:37:58 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Mellanox.com; s=selector1; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=ujM2SHImMfXfxPa0REczA0RGIxNlSFJkcDZepLC9WzI=; b=qsp+TvUBIjtZKdekLBg7BeKMhq2m3iC2Eg+VuByvzEHVKjm4Gcu6N43H2j8Dy8sOgvI9sk/bt11XTwdhsR/7RB2Oym9YfTidgRuS2z4fS2frnQS7Rulr1lwubLCDI2TG/9EhjnpNy1HMgJ9Gc1vmqzMktIHVZQ1O/+Zn/Qkz99I= Authentication-Results: spf=none (sender IP is ) smtp.mailfrom=yskoh@mellanox.com; Received: from mellanox.com (209.116.155.178) by AM5PR0501MB2034.eurprd05.prod.outlook.com (2603:10a6:203:1a::20) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.820.11; Tue, 5 Jun 2018 00:37:55 +0000 From: Yongseok Koh To: yliu@fridaylinux.org Cc: stable@dpdk.org, shahafs@mellanox.com, adrien.mazarguil@6wind.com, nelio.laranjeiro@6wind.com Date: Mon, 4 Jun 2018 17:37:20 -0700 Message-Id: <20180605003721.14367-10-yskoh@mellanox.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180605003721.14367-1-yskoh@mellanox.com> References: <20180605002732.13866-1-yskoh@mellanox.com> <20180605003721.14367-1-yskoh@mellanox.com> MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-Originating-IP: [209.116.155.178] X-ClientProxiedBy: BYAPR03CA0027.namprd03.prod.outlook.com (2603:10b6:a02:a8::40) To AM5PR0501MB2034.eurprd05.prod.outlook.com (2603:10a6:203:1a::20) X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-HT: Tenant X-Microsoft-Antispam: UriScan:; BCL:0; PCL:0; RULEID:(7020095)(4652020)(48565401081)(5600026)(4534165)(4627221)(201703031133081)(201702281549075)(2017052603328)(7153060)(7193020); SRVR:AM5PR0501MB2034; X-Microsoft-Exchange-Diagnostics: 
1; AM5PR0501MB2034; 3:N+rsXLim/h++U9BTyYI6W+HFAxB7tdYLtMn08EVjQT2j9CiUUSmKh1BZFMPHASt6ZEDDjm8x28y8gYloVyNgN5lnYGzLP5ejSd25OQ33smODZYBuOcenQZxHWMx5c6Ijv4ZCHZQOLnGOMe1+Slbr0hPPo34+ezKpBNtXmnht20vLxE/MvWL3KOPAfMrRTdqVq2gqj2sFRIYEy+Oasp3G4D0EkHGnnHVt5ZLZGIDdBckECBTnEHf7Z8yMam5kgzBg; 25:1OoKgPy32xGe/3Vn1wzAOiVVoEaOxzZ/I+i7QXxoZCn8bwSTSE5W4UekkU24Poh+2Z7JfuldcghONj6t2a9xMCwjRBiIwdje9a/qDvBB6y35A43ZnYeVhjCnJaQrjSSvcajRJtjOVmuVbOfmxZiOK5xw8EizlHuAjOxW9AuyQZMGKwwjinoaQkJj+3L9zR9hRejjksWTV2Wi/X1vuMfSj/W9z81dSAFzZ+NF1Ry1xnrEAeLSGV0xQTMXC+MUvTMJJQ1WdpKKSa2/VsHlUOoFIGQc0SKOuS/c0Ka00gjX3bK5crst5dAa9jBWnxP92SWOI7vt3/9IOc9RNNATAIb6Mw==; 31:BaJZcmhkfPY9QOeQAuIKFlXqc9sjxq24iuGA0iPBQzXGAam+KLvVt4gRfsgxZBMIo1QbULffu0y8UOrzWAsdcohT1glAzBjDatnu2e7oI8e/0xqMMYEReIgWJD5C6+Tx476EQlVuSLSuOUqpn92xqOWXzL5hhYtEAiuJPyO8MkInaNTCz4lvtL/bPnJ/vVXCJW8wFLRTxPZeAqHqFjvORbcaWx7m2C0d0P1+N84fVNM= X-MS-TrafficTypeDiagnostic: AM5PR0501MB2034: X-LD-Processed: a652971c-7d2e-4d9b-a6a4-d149256f461b,ExtAddr X-Microsoft-Exchange-Diagnostics: 1; AM5PR0501MB2034; 20:seivYvkammWZhB5jnOY9Oepl4Mwepms9P9g4uY/XQwd+g5f2iAB7yQbGUQn3oAiUWCPTAh3skTew8X0Bps0thEvumkm2sFem4+9TTmbPXhFLh2H93g1wo7NXDb6LTuXuFc3HnEi6TbAmcNj6CWRHjg7cNiO7zslhfsqn38y7dhZbclyPTuVefGfUJdhH2s06jwbkNQuxKHRonpP7IsMWCIYt4T3OK4zhnGP1zBds19ghGRBVb7CtxyY1CVV6KCEV5VbAvGXwDUDdWf413C2rPCG2qnxgRN7tdevGrxJ2CDxuR02ACa582+MsRI9yKyfy44c2mnnTNr1aueRX85t0yI2MUss++Zq/Tn9SpQj0nxMNWoKeK9zJvs2YANYUqvyvTIskV6+mFCLNBDCrx59IjE/MFAe5ZJ/wyDnrjH0WHFZSK+GubpAFKlxXsR5leju3LVBJ87rFfgmohJhOCQDmr0/5hx2V1sk1bsAvG1KtSQzBPdwIYkTTVZ3JrE/5qyoF; 4:EYtHonWSvAuE9jKBYMBbW+mufJgpmCMafld+PpjrD4tTv+Naolf03VOaKaCP88H+8W71K2E6DoAu1e7+jVaYyihRj4jDhyVsn80I2AtvuiL31VEOmCkrvrQB2NRMclJaTOU5UqY0lVmq3mGqo6t8e8CLjScRoiNoYo7Y9xfeHhR35V7NAeacudqL5tStu/AwShmPZxo0LmScpoayF0VOfhs+Tzfieee+BJMI3NCAB8RAdk0gc6y4cSTq2eT25EhdIoPF3L4NAK5lm+vRFi4MTir7ml1Iom0x0iUpMCrzqjq6FMHFZCGf58t0RC+Mv+lTiJ16deHVXQkXL3QNke87fw== X-Microsoft-Antispam-PRVS: X-Exchange-Antispam-Report-Test: UriScan:(788757137089)(21532816269658); 
X-MS-Exchange-SenderADCheck: 1 X-Exchange-Antispam-Report-CFA-Test: BCL:0; PCL:0; RULEID:(8211001083)(6040522)(2401047)(8121501046)(5005006)(3002001)(93006095)(93001095)(3231254)(944501410)(52105095)(10201501046)(6055026)(149027)(150027)(6041310)(201703131423095)(201702281528075)(20161123555045)(201703061421075)(201703061406153)(20161123558120)(20161123562045)(20161123564045)(20161123560045)(6072148)(201708071742011)(7699016); SRVR:AM5PR0501MB2034; BCL:0; PCL:0; RULEID:; SRVR:AM5PR0501MB2034; X-Forefront-PRVS: 0694C54398 X-Forefront-Antispam-Report: SFV:NSPM; SFS:(10009020)(346002)(376002)(396003)(366004)(39860400002)(39380400002)(199004)(189003)(81156014)(50226002)(25786009)(69596002)(4326008)(7736002)(8936002)(305945005)(47776003)(76176011)(53946003)(575784001)(52116002)(316002)(7696005)(68736007)(53936002)(5890100001)(97736004)(66066001)(86362001)(81166006)(55016002)(59450400001)(16200700003)(8676002)(2906002)(446003)(1076002)(105586002)(2870700001)(50466002)(6916009)(476003)(5660300001)(11346002)(6666003)(23676004)(2616005)(386003)(478600001)(21086003)(486006)(106356001)(36756003)(2351001)(2361001)(186003)(6116002)(16526019)(26005)(3846002)(956004)(559001)(579004)(569006); DIR:OUT; SFP:1101; SCL:1; SRVR:AM5PR0501MB2034; H:mellanox.com; FPR:; SPF:None; LANG:en; PTR:InfoNoRecords; A:1; MX:1; Received-SPF: None (protection.outlook.com: mellanox.com does not designate permitted sender hosts) X-Microsoft-Exchange-Diagnostics: =?utf-8?B?MTtBTTVQUjA1MDFNQjIwMzQ7MjM6MFJHRCs4bkNPVjhNYWliaW9nRktFRjVG?= =?utf-8?B?ZzJHSTk2dnZVamdScTRyRTVnZ0loTW5nWi9aL3dMSXdwOHV4OTFoYlNxQklC?= =?utf-8?B?SC9JaXlPRUlkUmJ4UzZVUzI2Q0dGOFZmRS9IVVR5TEtIdHlvS2JaaHpDTWpV?= =?utf-8?B?N3FqN2NhdmdPZW9aWjZvYk82R3pYV0poR2h5SnprdTZSQjdsVFd1VVM2OCsr?= =?utf-8?B?RXhiQWZTY0syeXRGUzRXdncxTGJ2K0htOFZmSU5rQnZKL1RMQ3AyekhZODVq?= =?utf-8?B?Q1NUVXFVeXBvNnBkaDVVb1huMUYrL3hFUnVKVUptMDAydUlXbmFvbElDLzFr?= =?utf-8?B?TWN3QVdXZ1FoMDI4MHRUVmU4K0hFdHJkMjVteE9JcENFc3k1UmxjNkVOWGwx?= 
=?utf-8?B?QW9UdlJjY3JKVGk2SU00cXc5a0RkK24vbEpoYTR2WlIxajk3Q3pFYzJENU5S?= =?utf-8?B?Mk1taGpwaHFYZ2tlOXB2T2E3RlJ3NTJHd3RzUmh0ZEVvdU0rWE03WjhPVGVj?= =?utf-8?B?RGVTMXkrMmIxVHlKbzVxalFSTUZINHo3VVpXZTBaR25KRDh5TVpoT09rL2o0?= =?utf-8?B?VE9Cai9FUVJUMGFQMzVsUjlwazlUbm54RnFrNThuZ2d5Sk9vSElpZHdmVlJn?= =?utf-8?B?OWtZa1l0NkJ2N0xua0Yxa0JJR2tWaWdOUHlTL0VmVGMwRXh3eTFFa3Uva3Fx?= =?utf-8?B?b0Yvc3NCZzhXT2Z6MjVPdytJcGcxR0NHbFh6S2ZUQkFxSmhTd25XbEpHNmUw?= =?utf-8?B?RHkyM2pGaVNHZStVeTNUa1czcjRmbFdjK0JLUlhRMGtGWnQ0aUtYN296Q1NS?= =?utf-8?B?ZHdCaEE2YlRMRThLUTZndkhBb0lwcTd2eCtzdEd2WEovYVc5WDRyZXRTZXhQ?= =?utf-8?B?VjFidHBUMFBRQlBvM1dJUjE0WXEzTk5UbzlGZDZNTUQvQml1WDRPNzY0WGlP?= =?utf-8?B?dzZIUTNTSDRPei9uRkJDSXBSQlJ0NDgwc1JyVk0wbG5XeExjWHc5K01KVGQ3?= =?utf-8?B?T0xUT3JRYUlCcXpJMlBLRG0zTjM2U2xvUXNVSFo5NFQ5ZlRzZnVQWVdsN2Rw?= =?utf-8?B?QzVNcWNFZDJQeVpGem9lSVEyajJzVU9QRWZScVVTcG94dlNJOXFWUFgxUk9D?= =?utf-8?B?VW5xcC9UL1RoYzFsb2RBTE1YQlB4VTBIUlQyamFYK0RVTWQxdmxOeFVXZXAv?= =?utf-8?B?Q01YR1YrSmRSaTBycHh1aEh3UzhCYWNreE16ZDJPOHdUZ1lKZkxvVFRnd3du?= =?utf-8?B?aUVpNE94T1BkUnlqYUJHUi8rUTBCSXJOSHozbTJOR29zbGRYbkJyMW4reEFh?= =?utf-8?B?UkJiR0JZRitzUnZPVzRMelk1NWFKM1Qwekk4MlM4UHhFNFJ4WmpQWEZueGhR?= =?utf-8?B?dHR4QVpCcm9FRWtzckp5ZTVYUVc2QXVGbC9HSTRuK0F3WE1rbG1laGFsNUU1?= =?utf-8?B?VEp5SUV0NTY0SFIvaVIwQ1Zjc2JSUVhsVTF0VUxtYzkzbVQ3RHdnTEVnaml0?= =?utf-8?B?QXhNNWEzdXZ5N21QNjhucjZicEdmYktUbHdKaHZkQ0lMOGM4SllMS1ozbm5K?= =?utf-8?B?dHhKcU1WWTZkU21CWEZNdmNBRHJXS0swdjZiT3lwbE9MSDNiekhyaE9BRW5K?= =?utf-8?B?a3Y5MnNuNHVOMFd4RjZGUjd5MktydWNlRUJSUkpxN0JIbExQOGFUYy9kTTMy?= =?utf-8?B?L1Nob3FsQ3R5WlJtWjhXeWRTT0M3ZFB0TWh6d2FCbWFRZFVTb1VhT3VZYXVZ?= =?utf-8?B?V3VFekpoZG41SFYybVc0TmZkcmJNRFFVQUxmN2JrQnNmT0Q5bE9mL1FteTdQ?= =?utf-8?B?aWtobDVxK0l0d0Q1M2VNVVAwRFB3VzBUYm10ekdkZmwxYy9yM0hBNm1MZ0pZ?= =?utf-8?B?SE40SnNVNTZDZnJabG1rVFRkQm5EOU90MXpXVkFwSWo4RlNEMlcvcnZXK0lz?= =?utf-8?B?dU44K1lWMHE4Q3c9PQ==?= X-Microsoft-Antispam-Message-Info: 
qnNoYHP5C4ppntAxLEfA76aLC1fVLIYYz8Cw8DmGXhLtmzdaNsMERo/dcXiSIj4Vn+CWoGpkjMw4Cv8sGINctMbkUP+PtCLwBTPAM2ornWry1G+VQ3DYRCX4F+NJUU5X99VNLHKhs0PKtyr59Wi9BljXsXGhLAaK8ApKCLJqZag3UfI9yiidqlWLPzuA7clR X-Microsoft-Exchange-Diagnostics: 1; AM5PR0501MB2034; 6:d5USmFcaxBfS+6AneFAbbg4uz0z5reheehXGELMIzAplwqjIroktO8CUBxKMLYwNp86B/1+UDf5rKtkrbjYPi1fupMdr11eAfV600YukUqfEabvlmpQqcZq9bEZ5qegvOel3KLF2axmB2tqfC6rF1oETkHw5mgd4B1lTnz2oVHKXc6qScDSEFsFJNkAv4V+1kXUzUiO5CncoqEbEvM5rCIeNtyWKgkQhq7sBpdWaM4RBH5XYkj2Drcchq7WRFVl5SN1fj86wdkn+HpLSzdHRSwcGedu3vxeafC1hFwnjhS5Ll97C+ZCESoyymp5zawwombiu0qNPNu6f5f6hHJgpreo5hEJL56faHSEb4ssS8x/K+t3xE9feEtY6tgfLCuHO8Td3PWOf765cJHA5R7sMiKUN2dnlxOVrSfErOn/VCFfaYjWeH9FsURLq9hE/EHQ2E3G/CtxKhDyWBYpYeNYB3g==; 5:Lxa2Q8wx921r85aW08xvX+HFLghFtnNF7X869tSNbMTXJFmYi3GnrRSdpxQV0v/42kKM4ohwg5lfb+XWP3mX8NwJXRjMTHEQmycrtNoOgpOr96oReM0leq1pQ8upCNHYe2In02arRKsXj3AqKqncBC4q3PEv4MNs3OKTOcMBqAw=; 24:0ZhYxWKERvUiYKgUDw63A7X2lHhU/O/yzIpO9gmoFAm5XcmXS7Y3dOW853EW9cjDnIYYV1KAA0Aug8dl3oiTTyxpi+yTF+UPW8r69BddJWM= SpamDiagnosticOutput: 1:99 SpamDiagnosticMetadata: NSPM X-Microsoft-Exchange-Diagnostics: 1; AM5PR0501MB2034; 7:D3Gm+oN3vyGk1UEGBQfF2aMikX5ql8GHTb2NtJTu7Xc2JJKkStFZFs2EU3gQs7wWcwS/yNZBd7OfrbvMuVKkdBxY56Bb0spB39IfRb5rSXI44tWYzaTrUqMNh5kbRcq9+XNi5c3CO7l/plRByd6XSXZCH1590Yjci4l6Z4ASBINUr95StTv+OP0cehUXcMcW5tggjG66SZUJNFVxeGQY6+fW8zoS/DCkb0NYoUtL0Im/dN41lpZR3HVUaeR7xItR X-MS-Office365-Filtering-Correlation-Id: 142da645-b8ff-4461-172f-08d5ca7c94bc X-OriginatorOrg: Mellanox.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jun 2018 00:37:55.0652 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 142da645-b8ff-4461-172f-08d5ca7c94bc X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: a652971c-7d2e-4d9b-a6a4-d149256f461b X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0501MB2034 Subject: [dpdk-stable] [PATCH v2 29/67] net/mlx5: use dynamic logging X-BeenThere: stable@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: patches for 
DPDK stable branches List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 05 Jun 2018 00:37:58 -0000 From: NĂ©lio Laranjeiro [ backported from upstream commit a170a30d22a8c34c36541d0dd6bcc2fcc4c9ee2f ] Signed-off-by: Nelio Laranjeiro Acked-by: Adrien Mazarguil --- drivers/net/mlx5/mlx5.c | 221 +++++++++++++++++-------------- drivers/net/mlx5/mlx5_ethdev.c | 104 +++++++++------ drivers/net/mlx5/mlx5_flow.c | 105 ++++++++------- drivers/net/mlx5/mlx5_mac.c | 12 +- drivers/net/mlx5/mlx5_mr.c | 85 ++++++------ drivers/net/mlx5/mlx5_rxmode.c | 16 +-- drivers/net/mlx5/mlx5_rxq.c | 283 ++++++++++++++++++++++------------------ drivers/net/mlx5/mlx5_rxtx.h | 17 +-- drivers/net/mlx5/mlx5_socket.c | 65 +++++---- drivers/net/mlx5/mlx5_stats.c | 38 +++--- drivers/net/mlx5/mlx5_trigger.c | 30 ++--- drivers/net/mlx5/mlx5_txq.c | 152 +++++++++++---------- drivers/net/mlx5/mlx5_utils.h | 27 ++-- drivers/net/mlx5/mlx5_vlan.c | 24 ++-- 14 files changed, 646 insertions(+), 533 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 8518fa588..911d4cf65 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -119,6 +119,10 @@ struct mlx5_args { int tx_vec_en; int rx_vec_en; }; + +/** Driver-specific log messages type. */ +int mlx5_logtype; + /** * Retrieve integer value from environment variable. * @@ -207,9 +211,9 @@ mlx5_dev_close(struct rte_eth_dev *dev) unsigned int i; int ret; - DEBUG("port %u closing device \"%s\"", - dev->data->port_id, - ((priv->ctx != NULL) ? priv->ctx->device->name : "")); + DRV_LOG(DEBUG, "port %u closing device \"%s\"", + dev->data->port_id, + ((priv->ctx != NULL) ? priv->ctx->device->name : "")); /* In case mlx5_dev_stop() has not been called. 
*/ mlx5_dev_interrupt_handler_uninstall(dev); mlx5_traffic_disable(dev); @@ -246,35 +250,36 @@ mlx5_dev_close(struct rte_eth_dev *dev) mlx5_socket_uninit(dev); ret = mlx5_hrxq_ibv_verify(dev); if (ret) - WARN("port %u some hash Rx queue still remain", - dev->data->port_id); + DRV_LOG(WARNING, "port %u some hash Rx queue still remain", + dev->data->port_id); ret = mlx5_ind_table_ibv_verify(dev); if (ret) - WARN("port %u some indirection table still remain", - dev->data->port_id); + DRV_LOG(WARNING, "port %u some indirection table still remain", + dev->data->port_id); ret = mlx5_rxq_ibv_verify(dev); if (ret) - WARN("port %u some Verbs Rx queue still remain", - dev->data->port_id); + DRV_LOG(WARNING, "port %u some Verbs Rx queue still remain", + dev->data->port_id); ret = mlx5_rxq_verify(dev); if (ret) - WARN("port %u some Rx queues still remain", - dev->data->port_id); + DRV_LOG(WARNING, "port %u some Rx queues still remain", + dev->data->port_id); ret = mlx5_txq_ibv_verify(dev); if (ret) - WARN("port %u some Verbs Tx queue still remain", - dev->data->port_id); + DRV_LOG(WARNING, "port %u some Verbs Tx queue still remain", + dev->data->port_id); ret = mlx5_txq_verify(dev); if (ret) - WARN("port %u some Tx queues still remain", - dev->data->port_id); + DRV_LOG(WARNING, "port %u some Tx queues still remain", + dev->data->port_id); ret = mlx5_flow_verify(dev); if (ret) - WARN("port %u some flows still remain", dev->data->port_id); + DRV_LOG(WARNING, "port %u some flows still remain", + dev->data->port_id); ret = mlx5_mr_verify(dev); if (ret) - WARN("port %u some memory region still remain", - dev->data->port_id); + DRV_LOG(WARNING, "port %u some memory region still remain", + dev->data->port_id); memset(priv, 0, sizeof(*priv)); } @@ -424,7 +429,7 @@ mlx5_args_check(const char *key, const char *val, void *opaque) tmp = strtoul(val, NULL, 0); if (errno) { rte_errno = errno; - WARN("%s: \"%s\" is not a valid integer", key, val); + DRV_LOG(WARNING, "%s: \"%s\" is not a 
valid integer", key, val); return -rte_errno; } if (strcmp(MLX5_RXQ_CQE_COMP_EN, key) == 0) { @@ -446,7 +451,7 @@ mlx5_args_check(const char *key, const char *val, void *opaque) } else if (strcmp(MLX5_RX_VEC_EN, key) == 0) { args->rx_vec_en = !!tmp; } else { - WARN("%s: unknown parameter", key); + DRV_LOG(WARNING, "%s: unknown parameter", key); rte_errno = EINVAL; return -rte_errno; } @@ -551,17 +556,18 @@ mlx5_uar_init_primary(struct rte_eth_dev *dev) addr = mmap(addr, MLX5_UAR_SIZE, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); if (addr == MAP_FAILED) { - ERROR("port %u failed to reserve UAR address space, please" - " adjust MLX5_UAR_SIZE or try --base-virtaddr", - dev->data->port_id); + DRV_LOG(ERR, + "port %u failed to reserve UAR address space, please" + " adjust MLX5_UAR_SIZE or try --base-virtaddr", + dev->data->port_id); rte_errno = ENOMEM; return -rte_errno; } /* Accept either same addr or a new addr returned from mmap if target * range occupied. */ - INFO("port %u reserved UAR address space: %p", dev->data->port_id, - addr); + DRV_LOG(INFO, "port %u reserved UAR address space: %p", + dev->data->port_id, addr); priv->uar_base = addr; /* for primary and secondary UAR re-mmap. */ uar_base = addr; /* process local, don't reserve again. 
*/ return 0; @@ -592,21 +598,23 @@ mlx5_uar_init_secondary(struct rte_eth_dev *dev) addr = mmap(priv->uar_base, MLX5_UAR_SIZE, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); if (addr == MAP_FAILED) { - ERROR("port %u UAR mmap failed: %p size: %llu", - dev->data->port_id, priv->uar_base, MLX5_UAR_SIZE); + DRV_LOG(ERR, "port %u UAR mmap failed: %p size: %llu", + dev->data->port_id, priv->uar_base, MLX5_UAR_SIZE); rte_errno = ENXIO; return -rte_errno; } if (priv->uar_base != addr) { - ERROR("port %u UAR address %p size %llu occupied, please adjust " - "MLX5_UAR_OFFSET or try EAL parameter --base-virtaddr", - dev->data->port_id, priv->uar_base, MLX5_UAR_SIZE); + DRV_LOG(ERR, + "port %u UAR address %p size %llu occupied, please" + " adjust MLX5_UAR_OFFSET or try EAL parameter" + " --base-virtaddr", + dev->data->port_id, priv->uar_base, MLX5_UAR_SIZE); rte_errno = ENXIO; return -rte_errno; } uar_base = addr; /* process local, don't reserve again */ - INFO("port %u reserved UAR address space: %p", dev->data->port_id, - addr); + DRV_LOG(INFO, "port %u reserved UAR address space: %p", + dev->data->port_id, addr); return 0; } @@ -679,11 +687,11 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, /* Get mlx5_dev[] index. */ idx = mlx5_dev_idx(&pci_dev->addr); if (idx == -1) { - ERROR("this driver cannot support any more adapters"); + DRV_LOG(ERR, "this driver cannot support any more adapters"); err = ENOMEM; goto error; } - DEBUG("using driver device index %d", idx); + DRV_LOG(DEBUG, "using driver device index %d", idx); /* Save PCI address. 
*/ mlx5_dev[idx].pci_addr = pci_dev->addr; list = ibv_get_device_list(&i); @@ -691,7 +699,8 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, assert(errno); err = errno; if (errno == ENOSYS) - ERROR("cannot list devices, is ib_uverbs loaded?"); + DRV_LOG(ERR, + "cannot list devices, is ib_uverbs loaded?"); goto error; } assert(i >= 0); @@ -703,7 +712,7 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_addr pci_addr; --i; - DEBUG("checking device \"%s\"", list[i]->name); + DRV_LOG(DEBUG, "checking device \"%s\"", list[i]->name); if (mlx5_ibv_device_to_pci_addr(list[i], &pci_addr)) continue; if ((pci_dev->addr.domain != pci_addr.domain) || @@ -725,7 +734,7 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, default: break; } - INFO("PCI information matches, using device \"%s\"", + DRV_LOG(INFO, "PCI information matches, using device \"%s\"", list[i]->name); attr_ctx = ibv_open_device(list[i]); rte_errno = errno; @@ -736,16 +745,18 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, ibv_free_device_list(list); switch (err) { case 0: - ERROR("cannot access device, is mlx5_ib loaded?"); + DRV_LOG(ERR, + "cannot access device, is mlx5_ib loaded?"); err = ENODEV; goto error; case EINVAL: - ERROR("cannot use device, are drivers up to date?"); + DRV_LOG(ERR, + "cannot use device, are drivers up to date?"); goto error; } } ibv_dev = list[i]; - DEBUG("device opened"); + DRV_LOG(DEBUG, "device opened"); /* * Multi-packet send is supported by ConnectX-4 Lx PF as well * as all ConnectX-5 devices. 
@@ -753,14 +764,14 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, mlx5dv_query_device(attr_ctx, &attrs_out); if (attrs_out.flags & MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED) { if (attrs_out.flags & MLX5DV_CONTEXT_FLAGS_ENHANCED_MPW) { - DEBUG("enhanced MPW is supported"); + DRV_LOG(DEBUG, "enhanced MPW is supported"); mps = MLX5_MPW_ENHANCED; } else { - DEBUG("MPW is supported"); + DRV_LOG(DEBUG, "MPW is supported"); mps = MLX5_MPW; } } else { - DEBUG("MPW isn't supported"); + DRV_LOG(DEBUG, "MPW isn't supported"); mps = MLX5_MPW_DISABLED; } if (RTE_CACHE_LINE_SIZE == 128 && @@ -772,7 +783,8 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, err = errno; goto error; } - INFO("%u port(s) detected", device_attr.orig_attr.phys_port_cnt); + DRV_LOG(INFO, "%u port(s) detected", + device_attr.orig_attr.phys_port_cnt); for (i = 0; i < device_attr.orig_attr.phys_port_cnt; i++) { char name[RTE_ETH_NAME_MAX_LEN]; uint32_t port = i + 1; /* ports are indexed from one */ @@ -804,7 +816,7 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, if (rte_eal_process_type() == RTE_PROC_SECONDARY) { eth_dev = rte_eth_dev_attach_secondary(name); if (eth_dev == NULL) { - ERROR("can not attach rte ethdev"); + DRV_LOG(ERR, "can not attach rte ethdev"); rte_errno = ENOMEM; err = rte_errno; goto error; @@ -826,7 +838,7 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, mlx5_select_tx_function(eth_dev); continue; } - DEBUG("using port %u (%08" PRIx32 ")", port, test); + DRV_LOG(DEBUG, "using port %u (%08" PRIx32 ")", port, test); ctx = ibv_open_device(ibv_dev); if (ctx == NULL) { err = ENODEV; @@ -836,23 +848,24 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, /* Check port status. 
*/ err = ibv_query_port(ctx, port, &port_attr); if (err) { - ERROR("port query failed: %s", strerror(err)); + DRV_LOG(ERR, "port query failed: %s", strerror(err)); goto port_error; } if (port_attr.link_layer != IBV_LINK_LAYER_ETHERNET) { - ERROR("port %d is not configured in Ethernet mode", - port); + DRV_LOG(ERR, + "port %d is not configured in Ethernet mode", + port); err = EINVAL; goto port_error; } if (port_attr.state != IBV_PORT_ACTIVE) - DEBUG("port %d is not active: \"%s\" (%d)", - port, ibv_port_state_str(port_attr.state), - port_attr.state); + DRV_LOG(DEBUG, "port %d is not active: \"%s\" (%d)", + port, ibv_port_state_str(port_attr.state), + port_attr.state); /* Allocate protection domain. */ pd = ibv_alloc_pd(ctx); if (pd == NULL) { - ERROR("PD allocation failure"); + DRV_LOG(ERR, "PD allocation failure"); err = ENOMEM; goto port_error; } @@ -862,7 +875,7 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, sizeof(*priv), RTE_CACHE_LINE_SIZE); if (priv == NULL) { - ERROR("priv allocation failure"); + DRV_LOG(ERR, "priv allocation failure"); err = ENOMEM; goto port_error; } @@ -881,35 +894,36 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, priv->rx_vec_en = 1; err = mlx5_args(&args, pci_dev->device.devargs); if (err) { - ERROR("failed to process device arguments: %s", - strerror(err)); + DRV_LOG(ERR, "failed to process device arguments: %s", + strerror(err)); goto port_error; } mlx5_args_assign(priv, &args); if (ibv_query_device_ex(ctx, NULL, &device_attr_ex)) { - ERROR("ibv_query_device_ex() failed"); + DRV_LOG(ERR, "ibv_query_device_ex() failed"); err = errno; goto port_error; } priv->hw_csum = !!(device_attr_ex.device_cap_flags_ex & IBV_DEVICE_RAW_IP_CSUM); - DEBUG("checksum offloading is %ssupported", - (priv->hw_csum ? "" : "not ")); + DRV_LOG(DEBUG, "checksum offloading is %ssupported", + (priv->hw_csum ? 
"" : "not ")); #ifdef HAVE_IBV_DEVICE_VXLAN_SUPPORT priv->hw_csum_l2tun = !!(exp_device_attr.exp_device_cap_flags & IBV_DEVICE_VXLAN_SUPPORT); #endif - DEBUG("Rx L2 tunnel checksum offloads are %ssupported", - (priv->hw_csum_l2tun ? "" : "not ")); + DRV_LOG(DEBUG, "Rx L2 tunnel checksum offloads are %ssupported", + (priv->hw_csum_l2tun ? "" : "not ")); #ifdef HAVE_IBV_DEVICE_COUNTERS_SET_SUPPORT priv->counter_set_supported = !!(device_attr.max_counter_sets); ibv_describe_counter_set(ctx, 0, &cs_desc); - DEBUG("counter type = %d, num of cs = %ld, attributes = %d", - cs_desc.counter_type, cs_desc.num_of_cs, - cs_desc.attributes); + DRV_LOG(DEBUG, + "counter type = %d, num of cs = %ld, attributes = %d", + cs_desc.counter_type, cs_desc.num_of_cs, + cs_desc.attributes); #endif priv->ind_table_max_size = device_attr_ex.rss_caps.max_rwq_indirection_table_size; @@ -918,23 +932,24 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, if (priv->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512) priv->ind_table_max_size = ETH_RSS_RETA_SIZE_512; - DEBUG("maximum Rx indirection table size is %u", - priv->ind_table_max_size); + DRV_LOG(DEBUG, "maximum Rx indirection table size is %u", + priv->ind_table_max_size); priv->hw_vlan_strip = !!(device_attr_ex.raw_packet_caps & IBV_RAW_PACKET_CAP_CVLAN_STRIPPING); - DEBUG("VLAN stripping is %ssupported", - (priv->hw_vlan_strip ? "" : "not ")); + DRV_LOG(DEBUG, "VLAN stripping is %ssupported", + (priv->hw_vlan_strip ? "" : "not ")); priv->hw_fcs_strip = !!(device_attr_ex.raw_packet_caps & IBV_RAW_PACKET_CAP_SCATTER_FCS); - DEBUG("FCS stripping configuration is %ssupported", - (priv->hw_fcs_strip ? "" : "not ")); + DRV_LOG(DEBUG, "FCS stripping configuration is %ssupported", + (priv->hw_fcs_strip ? "" : "not ")); #ifdef HAVE_IBV_WQ_FLAG_RX_END_PADDING priv->hw_padding = !!device_attr_ex.rx_pad_end_addr_align; #endif - DEBUG("hardware Rx end alignment padding is %ssupported", - (priv->hw_padding ? 
"" : "not ")); + DRV_LOG(DEBUG, + "hardware Rx end alignment padding is %ssupported", + (priv->hw_padding ? "" : "not ")); priv->tso = ((priv->tso) && (device_attr_ex.tso_caps.max_tso > 0) && (device_attr_ex.tso_caps.supported_qpts & @@ -943,18 +958,21 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, priv->max_tso_payload_sz = device_attr_ex.tso_caps.max_tso; if (priv->mps && !mps) { - ERROR("multi-packet send not supported on this device" - " (" MLX5_TXQ_MPW_EN ")"); + DRV_LOG(ERR, + "multi-packet send not supported on this device" + " (" MLX5_TXQ_MPW_EN ")"); err = ENOTSUP; goto port_error; } else if (priv->mps && priv->tso) { - WARN("multi-packet send not supported in conjunction " - "with TSO. MPS disabled"); + DRV_LOG(WARNING, + "multi-packet send not supported in conjunction" + " with TSO. MPS disabled"); priv->mps = 0; } - INFO("%s MPS is %s", - priv->mps == MLX5_MPW_ENHANCED ? "enhanced " : "", - priv->mps != MLX5_MPW_DISABLED ? "enabled" : "disabled"); + DRV_LOG(INFO, "%s MPS is %s", + priv->mps == MLX5_MPW_ENHANCED ? "enhanced " : "", + priv->mps != MLX5_MPW_DISABLED ? "enabled" : + "disabled"); /* Set default values for Enhanced MPW, a.k.a MPWv2. */ if (priv->mps == MLX5_MPW_ENHANCED) { if (args.txqs_inline == MLX5_ARG_UNSET) @@ -967,12 +985,12 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, MLX5_WQE_SIZE; } if (priv->cqe_comp && !cqe_comp) { - WARN("Rx CQE compression isn't supported"); + DRV_LOG(WARNING, "Rx CQE compression isn't supported"); priv->cqe_comp = 0; } eth_dev = rte_eth_dev_allocate(name); if (eth_dev == NULL) { - ERROR("can not allocate rte ethdev"); + DRV_LOG(ERR, "can not allocate rte ethdev"); err = ENOMEM; goto port_error; } @@ -987,34 +1005,37 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, goto port_error; /* Configure the first MAC address by default. */ if (mlx5_get_mac(eth_dev, &mac.addr_bytes)) { - ERROR("port %u cannot get MAC address, is mlx5_en" - " loaded? 
(errno: %s)", eth_dev->data->port_id, - strerror(errno)); + DRV_LOG(ERR, + "port %u cannot get MAC address, is mlx5_en" + " loaded? (errno: %s)", + eth_dev->data->port_id, strerror(errno)); err = ENODEV; goto port_error; } - INFO("port %u MAC address is %02x:%02x:%02x:%02x:%02x:%02x", - eth_dev->data->port_id, - mac.addr_bytes[0], mac.addr_bytes[1], - mac.addr_bytes[2], mac.addr_bytes[3], - mac.addr_bytes[4], mac.addr_bytes[5]); + DRV_LOG(INFO, + "port %u MAC address is %02x:%02x:%02x:%02x:%02x:%02x", + eth_dev->data->port_id, + mac.addr_bytes[0], mac.addr_bytes[1], + mac.addr_bytes[2], mac.addr_bytes[3], + mac.addr_bytes[4], mac.addr_bytes[5]); #ifndef NDEBUG { char ifname[IF_NAMESIZE]; if (mlx5_get_ifname(eth_dev, &ifname) == 0) - DEBUG("port %u ifname is \"%s\"", - eth_dev->data->port_id, ifname); + DRV_LOG(DEBUG, "port %u ifname is \"%s\"", + eth_dev->data->port_id, ifname); else - DEBUG("port %u ifname is unknown", - eth_dev->data->port_id); + DRV_LOG(DEBUG, "port %u ifname is unknown", + eth_dev->data->port_id); } #endif /* Get actual MTU if possible. */ err = mlx5_get_mtu(eth_dev, &priv->mtu); if (err) goto port_error; - DEBUG("port %u MTU is %u", eth_dev->data->port_id, priv->mtu); + DRV_LOG(DEBUG, "port %u MTU is %u", eth_dev->data->port_id, + priv->mtu); /* * Initialize burst functions to prevent crashes before link-up. */ @@ -1034,8 +1055,8 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, mlx5dv_set_context_attr(ctx, MLX5DV_CTX_ATTR_BUF_ALLOCATORS, (void *)((uintptr_t)&alctr)); /* Bring Ethernet device up. 
*/ - DEBUG("port %u forcing Ethernet interface up", - eth_dev->data->port_id); + DRV_LOG(DEBUG, "port %u forcing Ethernet interface up", + eth_dev->data->port_id); mlx5_set_flags(eth_dev, ~IFF_UP, IFF_UP); continue; port_error: @@ -1143,3 +1164,11 @@ rte_mlx5_pmd_init(void) RTE_PMD_EXPORT_NAME(net_mlx5, __COUNTER__); RTE_PMD_REGISTER_PCI_TABLE(net_mlx5, mlx5_pci_id_map); RTE_PMD_REGISTER_KMOD_DEP(net_mlx5, "* ib_uverbs & mlx5_core & mlx5_ib"); + +/** Initialize driver log type. */ +RTE_INIT(vdev_netvsc_init_log) +{ + mlx5_logtype = rte_log_register("pmd.net.mlx5"); + if (mlx5_logtype >= 0) + rte_log_set_level(mlx5_logtype, RTE_LOG_NOTICE); +} diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c index 8696c2d45..b78756efc 100644 --- a/drivers/net/mlx5/mlx5_ethdev.c +++ b/drivers/net/mlx5/mlx5_ethdev.c @@ -344,8 +344,8 @@ mlx5_dev_configure(struct rte_eth_dev *dev) rte_realloc(priv->rss_conf.rss_key, rss_hash_default_key_len, 0); if (!priv->rss_conf.rss_key) { - ERROR("port %u cannot allocate RSS hash key memory (%u)", - dev->data->port_id, rxqs_n); + DRV_LOG(ERR, "port %u cannot allocate RSS hash key memory (%u)", + dev->data->port_id, rxqs_n); rte_errno = ENOMEM; return -rte_errno; } @@ -359,20 +359,20 @@ mlx5_dev_configure(struct rte_eth_dev *dev) priv->rxqs = (void *)dev->data->rx_queues; priv->txqs = (void *)dev->data->tx_queues; if (txqs_n != priv->txqs_n) { - INFO("port %u Tx queues number update: %u -> %u", - dev->data->port_id, priv->txqs_n, txqs_n); + DRV_LOG(INFO, "port %u Tx queues number update: %u -> %u", + dev->data->port_id, priv->txqs_n, txqs_n); priv->txqs_n = txqs_n; } if (rxqs_n > priv->ind_table_max_size) { - ERROR("port %u cannot handle this many Rx queues (%u)", - dev->data->port_id, rxqs_n); + DRV_LOG(ERR, "port %u cannot handle this many Rx queues (%u)", + dev->data->port_id, rxqs_n); rte_errno = EINVAL; return -rte_errno; } if (rxqs_n == priv->rxqs_n) return 0; - INFO("port %u Rx queues number update: %u -> %u", - 
dev->data->port_id, priv->rxqs_n, rxqs_n); + DRV_LOG(INFO, "port %u Rx queues number update: %u -> %u", + dev->data->port_id, priv->rxqs_n, rxqs_n); priv->rxqs_n = rxqs_n; /* If the requested number of RX queues is not a power of two, use the * maximum indirection table size for better balancing. @@ -515,8 +515,8 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev) ret = mlx5_ifreq(dev, SIOCGIFFLAGS, &ifr); if (ret) { - WARN("port %u ioctl(SIOCGIFFLAGS) failed: %s", - dev->data->port_id, strerror(rte_errno)); + DRV_LOG(WARNING, "port %u ioctl(SIOCGIFFLAGS) failed: %s", + dev->data->port_id, strerror(rte_errno)); return ret; } memset(&dev_link, 0, sizeof(dev_link)); @@ -525,8 +525,9 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev) ifr.ifr_data = (void *)&edata; ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); if (ret) { - WARN("port %u ioctl(SIOCETHTOOL, ETHTOOL_GSET) failed: %s", - dev->data->port_id, strerror(rte_errno)); + DRV_LOG(WARNING, + "port %u ioctl(SIOCETHTOOL, ETHTOOL_GSET) failed: %s", + dev->data->port_id, strerror(rte_errno)); return ret; } link_speed = ethtool_cmd_speed(&edata); @@ -582,8 +583,8 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev) ret = mlx5_ifreq(dev, SIOCGIFFLAGS, &ifr); if (ret) { - WARN("port %u ioctl(SIOCGIFFLAGS) failed: %s", - dev->data->port_id, strerror(rte_errno)); + DRV_LOG(WARNING, "port %u ioctl(SIOCGIFFLAGS) failed: %s", + dev->data->port_id, strerror(rte_errno)); return ret; } memset(&dev_link, 0, sizeof(dev_link)); @@ -592,8 +593,10 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev) ifr.ifr_data = (void *)&gcmd; ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); if (ret) { - DEBUG("port %u ioctl(SIOCETHTOOL, ETHTOOL_GLINKSETTINGS)" - " failed: %s", dev->data->port_id, strerror(rte_errno)); + DRV_LOG(DEBUG, + "port %u ioctl(SIOCETHTOOL, ETHTOOL_GLINKSETTINGS)" + " failed: %s", + dev->data->port_id, strerror(rte_errno)); return ret; } gcmd.link_mode_masks_nwords = -gcmd.link_mode_masks_nwords; @@ -607,8 
+610,10 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev) ifr.ifr_data = (void *)ecmd; ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); if (ret) { - DEBUG("port %u ioctl(SIOCETHTOOL, ETHTOOL_GLINKSETTINGS)" - " failed: %s", dev->data->port_id, strerror(rte_errno)); + DRV_LOG(DEBUG, + "port %u ioctl(SIOCETHTOOL, ETHTOOL_GLINKSETTINGS)" + " failed: %s", + dev->data->port_id, strerror(rte_errno)); return ret; } dev_link.link_speed = ecmd->speed; @@ -679,14 +684,17 @@ mlx5_link_start(struct rte_eth_dev *dev) mlx5_select_rx_function(dev); ret = mlx5_traffic_enable(dev); if (ret) { - ERROR("port %u error occurred while configuring control flows:" - " %s", dev->data->port_id, strerror(rte_errno)); + DRV_LOG(ERR, + "port %u error occurred while configuring control" + " flows: %s", + dev->data->port_id, strerror(rte_errno)); return; } ret = mlx5_flow_start(dev, &priv->flows); if (ret) - ERROR("port %u error occurred while configuring flows: %s", - dev->data->port_id, strerror(rte_errno)); + DRV_LOG(ERR, + "port %u error occurred while configuring flows: %s", + dev->data->port_id, strerror(rte_errno)); } /** @@ -807,7 +815,8 @@ mlx5_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) return ret; if (kern_mtu == mtu) { priv->mtu = mtu; - DEBUG("port %u adapter MTU set to %u", dev->data->port_id, mtu); + DRV_LOG(DEBUG, "port %u adapter MTU set to %u", + dev->data->port_id, mtu); return 0; } rte_errno = EAGAIN; @@ -837,8 +846,10 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) ifr.ifr_data = (void *)ðpause; ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); if (ret) { - WARN("port %u ioctl(SIOCETHTOOL, ETHTOOL_GPAUSEPARAM) failed:" - " %s", dev->data->port_id, strerror(rte_errno)); + DRV_LOG(WARNING, + "port %u ioctl(SIOCETHTOOL, ETHTOOL_GPAUSEPARAM) failed:" + " %s", + dev->data->port_id, strerror(rte_errno)); return ret; } fc_conf->autoneg = ethpause.autoneg; @@ -888,8 +899,10 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf 
*fc_conf) ethpause.tx_pause = 0; ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); if (ret) { - WARN("port %u ioctl(SIOCETHTOOL, ETHTOOL_SPAUSEPARAM)" - " failed: %s", dev->data->port_id, strerror(rte_errno)); + DRV_LOG(WARNING, + "port %u ioctl(SIOCETHTOOL, ETHTOOL_SPAUSEPARAM)" + " failed: %s", + dev->data->port_id, strerror(rte_errno)); return ret; } return 0; @@ -1018,8 +1031,9 @@ mlx5_dev_status_handler(struct rte_eth_dev *dev) dev->data->dev_conf.intr_conf.rmv == 1) ret |= (1 << RTE_ETH_EVENT_INTR_RMV); else - DEBUG("port %u event type %d on not handled", - dev->data->port_id, event.event_type); + DRV_LOG(DEBUG, + "port %u event type %d not handled", + dev->data->port_id, event.event_type); ibv_ack_async_event(&event); } if (ret & (1 << RTE_ETH_EVENT_INTR_LSC)) @@ -1130,8 +1144,10 @@ mlx5_dev_interrupt_handler_install(struct rte_eth_dev *dev) flags = fcntl(priv->ctx->async_fd, F_GETFL); ret = fcntl(priv->ctx->async_fd, F_SETFL, flags | O_NONBLOCK); if (ret) { - INFO("port %u failed to change file descriptor async event" - " queue", dev->data->port_id); + DRV_LOG(INFO, + "port %u failed to change file descriptor async event" + " queue", + dev->data->port_id); dev->data->dev_conf.intr_conf.lsc = 0; dev->data->dev_conf.intr_conf.rmv = 0; } @@ -1144,8 +1160,8 @@ mlx5_dev_interrupt_handler_install(struct rte_eth_dev *dev) } ret = mlx5_socket_init(dev); if (ret) - ERROR("port %u cannot initialise socket: %s", - dev->data->port_id, strerror(rte_errno)); + DRV_LOG(ERR, "port %u cannot initialise socket: %s", - dev->data->port_id, strerror(rte_errno)); + DRV_LOG(ERR, "port %u cannot initialise socket: %s", + dev->data->port_id, strerror(rte_errno)); else if (priv->primary_socket) { priv->intr_handle_socket.fd = priv->primary_socket; priv->intr_handle_socket.type = RTE_INTR_HANDLE_EXT; @@ -1203,20 +1219,24 @@ mlx5_select_tx_function(struct rte_eth_dev *dev) dev->tx_pkt_burst = mlx5_tx_burst_raw_vec; else dev->tx_pkt_burst = mlx5_tx_burst_vec; - DEBUG("port %u selected enhanced MPW Tx vectorized" - " function", dev->data->port_id); + DRV_LOG(DEBUG, + "port %u selected 
enhanced MPW Tx vectorized" + " function", + dev->data->port_id); } else { dev->tx_pkt_burst = mlx5_tx_burst_empw; - DEBUG("port %u selected enhanced MPW Tx function", - dev->data->port_id); + DRV_LOG(DEBUG, + "port %u selected enhanced MPW Tx function", + dev->data->port_id); } } else if (priv->mps && priv->txq_inline) { dev->tx_pkt_burst = mlx5_tx_burst_mpw_inline; - DEBUG("port %u selected MPW inline Tx function", - dev->data->port_id); + DRV_LOG(DEBUG, "port %u selected MPW inline Tx function", + dev->data->port_id); } else if (priv->mps) { dev->tx_pkt_burst = mlx5_tx_burst_mpw; - DEBUG("port %u selected MPW Tx function", dev->data->port_id); + DRV_LOG(DEBUG, "port %u selected MPW Tx function", + dev->data->port_id); } } @@ -1232,8 +1252,8 @@ mlx5_select_rx_function(struct rte_eth_dev *dev) assert(dev != NULL); if (mlx5_check_vec_rx_support(dev) > 0) { dev->rx_pkt_burst = mlx5_rx_burst_vec; - DEBUG("port %u selected Rx vectorized function", - dev->data->port_id); + DRV_LOG(DEBUG, "port %u selected Rx vectorized function", + dev->data->port_id); } else { dev->rx_pkt_burst = mlx5_rx_burst; } diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 326392798..9a3fcf43e 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -1838,11 +1838,11 @@ mlx5_flow_create_action_queue(struct rte_eth_dev *dev, goto error; } ++flows_n; - DEBUG("port %u %p type %d QP %p ibv_flow %p", - dev->data->port_id, - (void *)flow, i, - (void *)flow->frxq[i].hrxq, - (void *)flow->frxq[i].ibv_flow); + DRV_LOG(DEBUG, "port %u %p type %d QP %p ibv_flow %p", + dev->data->port_id, + (void *)flow, i, + (void *)flow->frxq[i].hrxq, + (void *)flow->frxq[i].ibv_flow); } if (!flows_n) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE, @@ -1942,7 +1942,8 @@ mlx5_flow_list_create(struct rte_eth_dev *dev, if (ret) goto exit; TAILQ_INSERT_TAIL(list, flow, next); - DEBUG("port %u flow created %p", dev->data->port_id, (void *)flow); + 
DRV_LOG(DEBUG, "port %u flow created %p", dev->data->port_id, + (void *)flow); return flow; exit: for (i = 0; i != hash_rxq_init_n; ++i) { @@ -2061,7 +2062,8 @@ mlx5_flow_list_destroy(struct rte_eth_dev *dev, struct mlx5_flows *list, flow->cs = NULL; } TAILQ_REMOVE(list, flow, next); - DEBUG("port %u flow destroyed %p", dev->data->port_id, (void *)flow); + DRV_LOG(DEBUG, "port %u flow destroyed %p", dev->data->port_id, + (void *)flow); rte_free(flow); } @@ -2103,15 +2105,16 @@ mlx5_flow_create_drop_queue(struct rte_eth_dev *dev) assert(priv->ctx); fdq = rte_calloc(__func__, 1, sizeof(*fdq), 0); if (!fdq) { - WARN("port %u cannot allocate memory for drop queue", - dev->data->port_id); + DRV_LOG(WARNING, + "port %u cannot allocate memory for drop queue", + dev->data->port_id); rte_errno = ENOMEM; return -rte_errno; } fdq->cq = ibv_create_cq(priv->ctx, 1, NULL, NULL, 0); if (!fdq->cq) { - WARN("port %u cannot allocate CQ for drop queue", - dev->data->port_id); + DRV_LOG(WARNING, "port %u cannot allocate CQ for drop queue", + dev->data->port_id); rte_errno = errno; goto error; } @@ -2124,8 +2127,8 @@ mlx5_flow_create_drop_queue(struct rte_eth_dev *dev) .cq = fdq->cq, }); if (!fdq->wq) { - WARN("port %u cannot allocate WQ for drop queue", - dev->data->port_id); + DRV_LOG(WARNING, "port %u cannot allocate WQ for drop queue", + dev->data->port_id); rte_errno = errno; goto error; } @@ -2136,8 +2139,10 @@ mlx5_flow_create_drop_queue(struct rte_eth_dev *dev) .comp_mask = 0, }); if (!fdq->ind_table) { - WARN("port %u cannot allocate indirection table for drop" - " queue", dev->data->port_id); + DRV_LOG(WARNING, + "port %u cannot allocate indirection table for drop" + " queue", + dev->data->port_id); rte_errno = errno; goto error; } @@ -2159,8 +2164,8 @@ mlx5_flow_create_drop_queue(struct rte_eth_dev *dev) .pd = priv->pd }); if (!fdq->qp) { - WARN("port %u cannot allocate QP for drop queue", - dev->data->port_id); + DRV_LOG(WARNING, "port %u cannot allocate QP for drop queue", 
+ dev->data->port_id); rte_errno = errno; goto error; } @@ -2231,8 +2236,8 @@ mlx5_flow_stop(struct rte_eth_dev *dev, struct mlx5_flows *list) claim_zero(ibv_destroy_flow (flow->frxq[HASH_RXQ_ETH].ibv_flow)); flow->frxq[HASH_RXQ_ETH].ibv_flow = NULL; - DEBUG("port %u flow %p removed", dev->data->port_id, - (void *)flow); + DRV_LOG(DEBUG, "port %u flow %p removed", + dev->data->port_id, (void *)flow); /* Next flow. */ continue; } @@ -2264,8 +2269,8 @@ mlx5_flow_stop(struct rte_eth_dev *dev, struct mlx5_flows *list) mlx5_hrxq_release(dev, flow->frxq[i].hrxq); flow->frxq[i].hrxq = NULL; } - DEBUG("port %u flow %p removed", dev->data->port_id, - (void *)flow); + DRV_LOG(DEBUG, "port %u flow %p removed", dev->data->port_id, + (void *)flow); } } @@ -2295,14 +2300,14 @@ mlx5_flow_start(struct rte_eth_dev *dev, struct mlx5_flows *list) (priv->flow_drop_queue->qp, flow->frxq[HASH_RXQ_ETH].ibv_attr); if (!flow->frxq[HASH_RXQ_ETH].ibv_flow) { - DEBUG("port %u flow %p cannot be applied", - dev->data->port_id, - (void *)flow); + DRV_LOG(DEBUG, + "port %u flow %p cannot be applied", + dev->data->port_id, (void *)flow); rte_errno = EINVAL; return -rte_errno; } - DEBUG("port %u flow %p applied", dev->data->port_id, - (void *)flow); + DRV_LOG(DEBUG, "port %u flow %p applied", + dev->data->port_id, (void *)flow); /* Next flow. 
*/ continue; } @@ -2324,8 +2329,9 @@ mlx5_flow_start(struct rte_eth_dev *dev, struct mlx5_flows *list) (*flow->queues), flow->queues_n); if (!flow->frxq[i].hrxq) { - DEBUG("port %u flow %p cannot be applied", - dev->data->port_id, (void *)flow); + DRV_LOG(DEBUG, + "port %u flow %p cannot be applied", + dev->data->port_id, (void *)flow); rte_errno = EINVAL; return -rte_errno; } @@ -2334,13 +2340,14 @@ mlx5_flow_start(struct rte_eth_dev *dev, struct mlx5_flows *list) ibv_create_flow(flow->frxq[i].hrxq->qp, flow->frxq[i].ibv_attr); if (!flow->frxq[i].ibv_flow) { - DEBUG("port %u flow %p cannot be applied", - dev->data->port_id, (void *)flow); + DRV_LOG(DEBUG, + "port %u flow %p cannot be applied", + dev->data->port_id, (void *)flow); rte_errno = EINVAL; return -rte_errno; } - DEBUG("port %u flow %p applied", - dev->data->port_id, (void *)flow); + DRV_LOG(DEBUG, "port %u flow %p applied", + dev->data->port_id, (void *)flow); } if (!flow->mark) continue; @@ -2366,8 +2373,8 @@ mlx5_flow_verify(struct rte_eth_dev *dev) int ret = 0; TAILQ_FOREACH(flow, &priv->flows, next) { - DEBUG("port %u flow %p still referenced", - dev->data->port_id, (void *)flow); + DRV_LOG(DEBUG, "port %u flow %p still referenced", + dev->data->port_id, (void *)flow); ++ret; } return ret; @@ -2638,8 +2645,8 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev, /* Validate queue number. 
*/ if (fdir_filter->action.rx_queue >= priv->rxqs_n) { - ERROR("port %u invalid queue number %d", - dev->data->port_id, fdir_filter->action.rx_queue); + DRV_LOG(ERR, "port %u invalid queue number %d", + dev->data->port_id, fdir_filter->action.rx_queue); rte_errno = EINVAL; return -rte_errno; } @@ -2662,9 +2669,9 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev, }; break; default: - ERROR("port %u invalid behavior %d", - dev->data->port_id, - fdir_filter->action.behavior); + DRV_LOG(ERR, "port %u invalid behavior %d", + dev->data->port_id, + fdir_filter->action.behavior); rte_errno = ENOTSUP; return -rte_errno; } @@ -2800,8 +2807,8 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev, }; break; default: - ERROR("port %u invalid flow type%d", - dev->data->port_id, fdir_filter->input.flow_type); + DRV_LOG(ERR, "port %u invalid flow type %d", + dev->data->port_id, fdir_filter->input.flow_type); rte_errno = ENOTSUP; return -rte_errno; } @@ -2850,8 +2857,8 @@ mlx5_fdir_filter_add(struct rte_eth_dev *dev, attributes.items, attributes.actions, &error); if (flow) { - DEBUG("port %u FDIR created %p", dev->data->port_id, - (void *)flow); + DRV_LOG(DEBUG, "port %u FDIR created %p", dev->data->port_id, + (void *)flow); return 0; } return -rte_errno; @@ -3045,8 +3052,8 @@ mlx5_fdir_ctrl_func(struct rte_eth_dev *dev, enum rte_filter_op filter_op, return 0; if (fdir_mode != RTE_FDIR_MODE_PERFECT && fdir_mode != RTE_FDIR_MODE_PERFECT_MAC_VLAN) { - ERROR("port %u flow director mode %d not supported", - dev->data->port_id, fdir_mode); + DRV_LOG(ERR, "port %u flow director mode %d not supported", + dev->data->port_id, fdir_mode); rte_errno = EINVAL; return -rte_errno; } @@ -3064,8 +3071,8 @@ mlx5_fdir_ctrl_func(struct rte_eth_dev *dev, enum rte_filter_op filter_op, mlx5_fdir_info_get(dev, arg); break; default: - DEBUG("port %u unknown operation %u", dev->data->port_id, - filter_op); + DRV_LOG(DEBUG, "port %u unknown operation %u", + dev->data->port_id, filter_op); rte_errno = EINVAL; 
return -rte_errno; } @@ -3104,8 +3111,8 @@ mlx5_dev_filter_ctrl(struct rte_eth_dev *dev, case RTE_ETH_FILTER_FDIR: return mlx5_fdir_ctrl_func(dev, filter_op, arg); default: - ERROR("port %u filter type (%d) not supported", - dev->data->port_id, filter_type); + DRV_LOG(ERR, "port %u filter type (%d) not supported", + dev->data->port_id, filter_type); rte_errno = ENOTSUP; return -rte_errno; } diff --git a/drivers/net/mlx5/mlx5_mac.c b/drivers/net/mlx5/mlx5_mac.c index 69fc06897..9de351426 100644 --- a/drivers/net/mlx5/mlx5_mac.c +++ b/drivers/net/mlx5/mlx5_mac.c @@ -101,8 +101,8 @@ mlx5_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index) int ret = mlx5_traffic_restart(dev); if (ret) - ERROR("port %u cannot remove mac address: %s", - dev->data->port_id, strerror(rte_errno)); + DRV_LOG(ERR, "port %u cannot remove mac address: %s", + dev->data->port_id, strerror(rte_errno)); } } @@ -158,9 +158,11 @@ mlx5_mac_addr_set(struct rte_eth_dev *dev, struct ether_addr *mac_addr) { int ret; - DEBUG("port %u setting primary MAC address", dev->data->port_id); + DRV_LOG(DEBUG, "port %u setting primary MAC address", + dev->data->port_id); + ret = mlx5_mac_addr_add(dev, mac_addr, 0, 0); if (ret) - ERROR("port %u cannot set mac address: %s", - dev->data->port_id, strerror(rte_errno)); + DRV_LOG(ERR, "port %u cannot set mac address: %s", + dev->data->port_id, strerror(rte_errno)); } diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c index 42109a6a4..933bfe395 100644 --- a/drivers/net/mlx5/mlx5_mr.c +++ b/drivers/net/mlx5/mlx5_mr.c @@ -133,17 +133,18 @@ mlx5_txq_mp2mr_reg(struct mlx5_txq_data *txq, struct rte_mempool *mp, rte_spinlock_lock(&txq_ctrl->priv->mr_lock); /* Add a new entry, register MR first. 
*/ - DEBUG("port %u discovered new memory pool \"%s\" (%p)", - txq_ctrl->priv->dev->data->port_id, mp->name, (void *)mp); + DRV_LOG(DEBUG, "port %u discovered new memory pool \"%s\" (%p)", + txq_ctrl->priv->dev->data->port_id, mp->name, (void *)mp); dev = txq_ctrl->priv->dev; mr = mlx5_mr_get(dev, mp); if (mr == NULL) { if (rte_eal_process_type() != RTE_PROC_PRIMARY) { - DEBUG("port %u using unregistered mempool 0x%p(%s) in " - "secondary process, please create mempool before " - " rte_eth_dev_start()", - txq_ctrl->priv->dev->data->port_id, - (void *)mp, mp->name); + DRV_LOG(DEBUG, + "port %u using unregistered mempool 0x%p(%s)" + " in secondary process, please create mempool" + " before rte_eth_dev_start()", + txq_ctrl->priv->dev->data->port_id, + (void *)mp, mp->name); rte_spinlock_unlock(&txq_ctrl->priv->mr_lock); rte_errno = ENOTSUP; return NULL; @@ -151,17 +152,19 @@ mlx5_txq_mp2mr_reg(struct mlx5_txq_data *txq, struct rte_mempool *mp, mr = mlx5_mr_new(dev, mp); } if (unlikely(mr == NULL)) { + DRV_LOG(DEBUG, + "port %u unable to configure memory region," + " ibv_reg_mr() failed.", + txq_ctrl->priv->dev->data->port_id); rte_spinlock_unlock(&txq_ctrl->priv->mr_lock); - DEBUG("port %u unable to configure memory region, ibv_reg_mr()" - " failed", - txq_ctrl->priv->dev->data->port_id); return NULL; } if (unlikely(idx == RTE_DIM(txq->mp2mr))) { /* Table is full, remove oldest entry. */ - DEBUG("port %u memroy region <-> memory pool table full, " - " dropping oldest entry", - txq_ctrl->priv->dev->data->port_id); + DRV_LOG(DEBUG, + "port %u memory region <-> memory pool table full, " + " dropping oldest entry", + txq_ctrl->priv->dev->data->port_id); --idx; mlx5_mr_release(txq->mp2mr[0]); memmove(&txq->mp2mr[0], &txq->mp2mr[1], @@ -169,9 +172,11 @@ mlx5_txq_mp2mr_reg(struct mlx5_txq_data *txq, struct rte_mempool *mp, } /* Store the new entry. 
*/ txq_ctrl->txq.mp2mr[idx] = mr; - DEBUG("port %u new memory region lkey for MP \"%s\" (%p): 0x%08" PRIu32, - txq_ctrl->priv->dev->data->port_id, mp->name, (void *)mp, - txq_ctrl->txq.mp2mr[idx]->lkey); + DRV_LOG(DEBUG, + "port %u new memory region lkey for MP \"%s\" (%p): 0x%08" + PRIu32, + txq_ctrl->priv->dev->data->port_id, mp->name, (void *)mp, + txq_ctrl->txq.mp2mr[idx]->lkey); rte_spinlock_unlock(&txq_ctrl->priv->mr_lock); return mr; } @@ -238,8 +243,8 @@ mlx5_mp2mr_iter(struct rte_mempool *mp, void *arg) } mr = mlx5_mr_new(priv->dev, mp); if (!mr) - ERROR("port %u cannot create memory region: %s", - priv->dev->data->port_id, strerror(rte_errno)); + DRV_LOG(ERR, "port %u cannot create memory region: %s", + priv->dev->data->port_id, strerror(rte_errno)); } /** @@ -266,21 +271,22 @@ mlx5_mr_new(struct rte_eth_dev *dev, struct rte_mempool *mp) mr = rte_zmalloc_socket(__func__, sizeof(*mr), 0, mp->socket_id); if (!mr) { - DEBUG("port %u unable to configure memory region, ibv_reg_mr()" - " failed", - dev->data->port_id); + DRV_LOG(DEBUG, + "port %u unable to configure memory region," + " ibv_reg_mr() failed.", + dev->data->port_id); rte_errno = ENOMEM; return NULL; } if (mlx5_check_mempool(mp, &start, &end) != 0) { - ERROR("port %u mempool %p: not virtually contiguous", - dev->data->port_id, (void *)mp); + DRV_LOG(ERR, "port %u mempool %p: not virtually contiguous", + dev->data->port_id, (void *)mp); rte_errno = ENOMEM; return NULL; } - DEBUG("port %u mempool %p area start=%p end=%p size=%zu", - dev->data->port_id, (void *)mp, (void *)start, (void *)end, - (size_t)(end - start)); + DRV_LOG(DEBUG, "port %u mempool %p area start=%p end=%p size=%zu", + dev->data->port_id, (void *)mp, (void *)start, (void *)end, + (size_t)(end - start)); /* Save original addresses for exact MR lookup. 
*/ mr->start = start; mr->end = end; @@ -295,10 +301,11 @@ mlx5_mr_new(struct rte_eth_dev *dev, struct rte_mempool *mp) if ((end > addr) && (end < addr + len)) end = RTE_ALIGN_CEIL(end, align); } - DEBUG("port %u mempool %p using start=%p end=%p size=%zu for memory" - " region", - dev->data->port_id, (void *)mp, (void *)start, (void *)end, - (size_t)(end - start)); + DRV_LOG(DEBUG, + "port %u mempool %p using start=%p end=%p size=%zu for memory" + " region", + dev->data->port_id, (void *)mp, (void *)start, (void *)end, + (size_t)(end - start)); mr->mr = ibv_reg_mr(priv->pd, (void *)start, end - start, IBV_ACCESS_LOCAL_WRITE); if (!mr->mr) { @@ -308,8 +315,8 @@ mlx5_mr_new(struct rte_eth_dev *dev, struct rte_mempool *mp) mr->mp = mp; mr->lkey = rte_cpu_to_be_32(mr->mr->lkey); rte_atomic32_inc(&mr->refcnt); - DEBUG("port %u new memory region %p refcnt: %d", - dev->data->port_id, (void *)mr, rte_atomic32_read(&mr->refcnt)); + DRV_LOG(DEBUG, "port %u new memory region %p refcnt: %d", + dev->data->port_id, (void *)mr, rte_atomic32_read(&mr->refcnt)); LIST_INSERT_HEAD(&priv->mr, mr, next); return mr; } @@ -337,9 +344,9 @@ mlx5_mr_get(struct rte_eth_dev *dev, struct rte_mempool *mp) LIST_FOREACH(mr, &priv->mr, next) { if (mr->mp == mp) { rte_atomic32_inc(&mr->refcnt); - DEBUG("port %u memory region %p refcnt: %d", - dev->data->port_id, (void *)mr, - rte_atomic32_read(&mr->refcnt)); + DRV_LOG(DEBUG, "port %u memory region %p refcnt: %d", + dev->data->port_id, (void *)mr, + rte_atomic32_read(&mr->refcnt)); return mr; } } @@ -359,8 +366,8 @@ int mlx5_mr_release(struct mlx5_mr *mr) { assert(mr); - DEBUG("memory region %p refcnt: %d", (void *)mr, - rte_atomic32_read(&mr->refcnt)); + DRV_LOG(DEBUG, "memory region %p refcnt: %d", (void *)mr, + rte_atomic32_read(&mr->refcnt)); if (rte_atomic32_dec_and_test(&mr->refcnt)) { claim_zero(ibv_dereg_mr(mr->mr)); LIST_REMOVE(mr, next); @@ -387,8 +394,8 @@ mlx5_mr_verify(struct rte_eth_dev *dev) struct mlx5_mr *mr; LIST_FOREACH(mr, 
&priv->mr, next) { - DEBUG("port %u memory region %p still referenced", - dev->data->port_id, (void *)mr); + DRV_LOG(DEBUG, "port %u memory region %p still referenced", + dev->data->port_id, (void *)mr); ++ret; } return ret; diff --git a/drivers/net/mlx5/mlx5_rxmode.c b/drivers/net/mlx5/mlx5_rxmode.c index f92ce8ef8..23eae7c12 100644 --- a/drivers/net/mlx5/mlx5_rxmode.c +++ b/drivers/net/mlx5/mlx5_rxmode.c @@ -65,8 +65,8 @@ mlx5_promiscuous_enable(struct rte_eth_dev *dev) dev->data->promiscuous = 1; ret = mlx5_traffic_restart(dev); if (ret) - ERROR("port %u cannot enable promiscuous mode: %s", - dev->data->port_id, strerror(rte_errno)); + DRV_LOG(ERR, "port %u cannot enable promiscuous mode: %s", + dev->data->port_id, strerror(rte_errno)); } /** @@ -83,8 +83,8 @@ mlx5_promiscuous_disable(struct rte_eth_dev *dev) dev->data->promiscuous = 0; ret = mlx5_traffic_restart(dev); if (ret) - ERROR("port %u cannot disable promiscuous mode: %s", - dev->data->port_id, strerror(rte_errno)); + DRV_LOG(ERR, "port %u cannot disable promiscuous mode: %s", + dev->data->port_id, strerror(rte_errno)); } /** @@ -101,8 +101,8 @@ mlx5_allmulticast_enable(struct rte_eth_dev *dev) dev->data->all_multicast = 1; ret = mlx5_traffic_restart(dev); if (ret) - ERROR("port %u cannot enable allmulicast mode: %s", - dev->data->port_id, strerror(rte_errno)); + DRV_LOG(ERR, "port %u cannot enable allmulticast mode: %s", + dev->data->port_id, strerror(rte_errno)); } /** @@ -119,6 +119,6 @@ mlx5_allmulticast_disable(struct rte_eth_dev *dev) dev->data->all_multicast = 0; ret = mlx5_traffic_restart(dev); if (ret) - ERROR("port %u cannot disable allmulicast mode: %s", - dev->data->port_id, strerror(rte_errno)); + DRV_LOG(ERR, "port %u cannot disable allmulticast mode: %s", + dev->data->port_id, strerror(rte_errno)); } diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index c97844f63..1b0a95e0a 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -104,8 +104,8 @@ 
rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl) buf = rte_pktmbuf_alloc(rxq_ctrl->rxq.mp); if (buf == NULL) { - ERROR("port %u empty mbuf pool", - rxq_ctrl->priv->dev->data->port_id); + DRV_LOG(ERR, "port %u empty mbuf pool", + rxq_ctrl->priv->dev->data->port_id); rte_errno = ENOMEM; goto error; } @@ -146,9 +146,11 @@ rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl) for (j = 0; j < MLX5_VPMD_DESCS_PER_LOOP; ++j) (*rxq->elts)[elts_n + j] = &rxq->fake_mbuf; } - DEBUG("port %u Rx queue %u allocated and configured %u segments" - " (max %u packets)", rxq_ctrl->priv->dev->data->port_id, - rxq_ctrl->idx, elts_n, elts_n / (1 << rxq_ctrl->rxq.sges_n)); + DRV_LOG(DEBUG, + "port %u Rx queue %u allocated and configured %u segments" + " (max %u packets)", + rxq_ctrl->priv->dev->data->port_id, rxq_ctrl->idx, elts_n, + elts_n / (1 << rxq_ctrl->rxq.sges_n)); return 0; error: err = rte_errno; /* Save rte_errno before cleanup. */ @@ -158,8 +160,8 @@ rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl) rte_pktmbuf_free_seg((*rxq_ctrl->rxq.elts)[i]); (*rxq_ctrl->rxq.elts)[i] = NULL; } - DEBUG("port %u Rx queue %u failed, freed everything", - rxq_ctrl->priv->dev->data->port_id, rxq_ctrl->idx); + DRV_LOG(DEBUG, "port %u Rx queue %u failed, freed everything", + rxq_ctrl->priv->dev->data->port_id, rxq_ctrl->idx); rte_errno = err; /* Restore rte_errno. 
*/ return -rte_errno; } @@ -179,8 +181,8 @@ rxq_free_elts(struct mlx5_rxq_ctrl *rxq_ctrl) uint16_t used = q_n - (rxq->rq_ci - rxq->rq_pi); uint16_t i; - DEBUG("port %u Rx queue %u freeing WRs", - rxq_ctrl->priv->dev->data->port_id, rxq_ctrl->idx); + DRV_LOG(DEBUG, "port %u Rx queue %u freeing WRs", + rxq_ctrl->priv->dev->data->port_id, rxq_ctrl->idx); if (rxq->elts == NULL) return; /** @@ -210,8 +212,8 @@ rxq_free_elts(struct mlx5_rxq_ctrl *rxq_ctrl) void mlx5_rxq_cleanup(struct mlx5_rxq_ctrl *rxq_ctrl) { - DEBUG("port %u cleaning up Rx queue %u", - rxq_ctrl->priv->dev->data->port_id, rxq_ctrl->idx); + DRV_LOG(DEBUG, "port %u cleaning up Rx queue %u", + rxq_ctrl->priv->dev->data->port_id, rxq_ctrl->idx); if (rxq_ctrl->ibv) mlx5_rxq_ibv_release(rxq_ctrl->ibv); memset(rxq_ctrl, 0, sizeof(*rxq_ctrl)); @@ -248,33 +250,35 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, if (!rte_is_power_of_2(desc)) { desc = 1 << log2above(desc); - WARN("port %u increased number of descriptors in Rx queue %u" - " to the next power of two (%d)", - dev->data->port_id, idx, desc); + DRV_LOG(WARNING, + "port %u increased number of descriptors in Rx queue %u" + " to the next power of two (%d)", + dev->data->port_id, idx, desc); } - DEBUG("port %u configuring Rx queue %u for %u descriptors", - dev->data->port_id, idx, desc); + DRV_LOG(DEBUG, "port %u configuring Rx queue %u for %u descriptors", + dev->data->port_id, idx, desc); if (idx >= priv->rxqs_n) { - ERROR("port %u Rx queue index out of range (%u >= %u)", - dev->data->port_id, idx, priv->rxqs_n); + DRV_LOG(ERR, "port %u Rx queue index out of range (%u >= %u)", + dev->data->port_id, idx, priv->rxqs_n); rte_errno = EOVERFLOW; return -rte_errno; } if (!mlx5_rxq_releasable(dev, idx)) { - ERROR("port %u unable to release queue index %u", - dev->data->port_id, idx); + DRV_LOG(ERR, "port %u unable to release queue index %u", + dev->data->port_id, idx); rte_errno = EBUSY; return -rte_errno; } mlx5_rxq_release(dev, 
idx); rxq_ctrl = mlx5_rxq_new(dev, idx, desc, socket, mp); if (!rxq_ctrl) { - ERROR("port %u unable to allocate queue index %u", - dev->data->port_id, idx); + DRV_LOG(ERR, "port %u unable to allocate queue index %u", + dev->data->port_id, idx); rte_errno = ENOMEM; return -rte_errno; } - DEBUG("port %u adding Rx queue %u to list", dev->data->port_id, idx); + DRV_LOG(DEBUG, "port %u adding Rx queue %u to list", + dev->data->port_id, idx); (*priv->rxqs)[idx] = &rxq_ctrl->rxq; return 0; } @@ -327,9 +331,10 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) mlx5_rx_intr_vec_disable(dev); intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0])); if (intr_handle->intr_vec == NULL) { - ERROR("port %u failed to allocate memory for interrupt vector," - " Rx interrupts will not be supported", - dev->data->port_id); + DRV_LOG(ERR, + "port %u failed to allocate memory for interrupt" + " vector, Rx interrupts will not be supported", + dev->data->port_id); rte_errno = ENOMEM; return -rte_errno; } @@ -350,9 +355,11 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) continue; } if (count >= RTE_MAX_RXTX_INTR_VEC_ID) { - ERROR("port %u too many Rx queues for interrupt vector" - " size (%d), Rx interrupts cannot be enabled", - dev->data->port_id, RTE_MAX_RXTX_INTR_VEC_ID); + DRV_LOG(ERR, + "port %u too many Rx queues for interrupt" + " vector size (%d), Rx interrupts cannot be" + " enabled", + dev->data->port_id, RTE_MAX_RXTX_INTR_VEC_ID); mlx5_rx_intr_vec_disable(dev); rte_errno = ENOMEM; return -rte_errno; @@ -362,9 +369,11 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) rc = fcntl(fd, F_SETFL, flags | O_NONBLOCK); if (rc < 0) { rte_errno = errno; - ERROR("port %u failed to make Rx interrupt file" - " descriptor %d non-blocking for queue index %d", - dev->data->port_id, fd, i); + DRV_LOG(ERR, + "port %u failed to make Rx interrupt file" + " descriptor %d non-blocking for queue index" + " %d", + dev->data->port_id, fd, i); mlx5_rx_intr_vec_disable(dev); return 
-rte_errno; } @@ -531,8 +540,8 @@ mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id) ret = rte_errno; /* Save rte_errno before cleanup. */ if (rxq_ibv) mlx5_rxq_ibv_release(rxq_ibv); - WARN("port %u unable to disable interrupt on Rx queue %d", - dev->data->port_id, rx_queue_id); + DRV_LOG(WARNING, "port %u unable to disable interrupt on Rx queue %d", + dev->data->port_id, rx_queue_id); rte_errno = ret; /* Restore rte_errno. */ return -rte_errno; } @@ -579,8 +588,9 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) tmpl = rte_calloc_socket(__func__, 1, sizeof(*tmpl), 0, rxq_ctrl->socket); if (!tmpl) { - ERROR("port %u Rx queue %u cannot allocate verbs resources", - dev->data->port_id, rxq_ctrl->idx); + DRV_LOG(ERR, + "port %u Rx queue %u cannot allocate verbs resources", + dev->data->port_id, rxq_ctrl->idx); rte_errno = ENOMEM; goto error; } @@ -590,16 +600,16 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) if (!tmpl->mr) { tmpl->mr = mlx5_mr_new(dev, rxq_data->mp); if (!tmpl->mr) { - ERROR("port %u: memory region creation failure", - dev->data->port_id); + DRV_LOG(ERR, "port %u: memory region creation failure", + dev->data->port_id); goto error; } } if (rxq_ctrl->irq) { tmpl->channel = ibv_create_comp_channel(priv->ctx); if (!tmpl->channel) { - ERROR("port %u: comp channel creation failure", - dev->data->port_id); + DRV_LOG(ERR, "port %u: comp channel creation failure", + dev->data->port_id); rte_errno = ENOMEM; goto error; } @@ -623,21 +633,23 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) if (mlx5_rxq_check_vec_support(rxq_data) < 0) attr.cq.ibv.cqe *= 2; } else if (priv->cqe_comp && rxq_data->hw_timestamp) { - DEBUG("port %u Rx CQE compression is disabled for HW timestamp", - dev->data->port_id); + DRV_LOG(DEBUG, + "port %u Rx CQE compression is disabled for HW" + " timestamp", + dev->data->port_id); } tmpl->cq = ibv_cq_ex_to_cq(mlx5dv_create_cq(priv->ctx, &attr.cq.ibv, &attr.cq.mlx5)); if (tmpl->cq == NULL) { - 
ERROR("port %u Rx queue %u CQ creation failure", - dev->data->port_id, idx); + DRV_LOG(ERR, "port %u Rx queue %u CQ creation failure", + dev->data->port_id, idx); rte_errno = ENOMEM; goto error; } - DEBUG("port %u priv->device_attr.max_qp_wr is %d", - dev->data->port_id, priv->device_attr.orig_attr.max_qp_wr); - DEBUG("port %u priv->device_attr.max_sge is %d", - dev->data->port_id, priv->device_attr.orig_attr.max_sge); + DRV_LOG(DEBUG, "port %u priv->device_attr.max_qp_wr is %d", + dev->data->port_id, priv->device_attr.orig_attr.max_qp_wr); + DRV_LOG(DEBUG, "port %u priv->device_attr.max_sge is %d", + dev->data->port_id, priv->device_attr.orig_attr.max_sge); attr.wq = (struct ibv_wq_init_attr){ .wq_context = NULL, /* Could be useful in the future. */ .wq_type = IBV_WQT_RQ, @@ -667,8 +679,8 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) #endif tmpl->wq = ibv_create_wq(priv->ctx, &attr.wq); if (tmpl->wq == NULL) { - ERROR("port %u Rx queue %u WQ creation failure", - dev->data->port_id, idx); + DRV_LOG(ERR, "port %u Rx queue %u WQ creation failure", + dev->data->port_id, idx); rte_errno = ENOMEM; goto error; } @@ -679,12 +691,13 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) if (((int)attr.wq.max_wr != ((1 << rxq_data->elts_n) >> rxq_data->sges_n)) || ((int)attr.wq.max_sge != (1 << rxq_data->sges_n))) { - ERROR("port %u Rx queue %u requested %u*%u but got %u*%u" - " WRs*SGEs", - dev->data->port_id, idx, - ((1 << rxq_data->elts_n) >> rxq_data->sges_n), - (1 << rxq_data->sges_n), - attr.wq.max_wr, attr.wq.max_sge); + DRV_LOG(ERR, + "port %u Rx queue %u requested %u*%u but got %u*%u" + " WRs*SGEs", + dev->data->port_id, idx, + ((1 << rxq_data->elts_n) >> rxq_data->sges_n), + (1 << rxq_data->sges_n), + attr.wq.max_wr, attr.wq.max_sge); rte_errno = EINVAL; goto error; } @@ -695,8 +708,9 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) }; ret = ibv_modify_wq(tmpl->wq, &mod); if (ret) { - ERROR("port %u Rx queue %u WQ state to IBV_WQS_RDY 
failed",
-		      dev->data->port_id, idx);
+		DRV_LOG(ERR,
+			"port %u Rx queue %u WQ state to IBV_WQS_RDY failed",
+			dev->data->port_id, idx);
 		rte_errno = ret;
 		goto error;
 	}
@@ -710,9 +724,10 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 		goto error;
 	}
 	if (cq_info.cqe_size != RTE_CACHE_LINE_SIZE) {
-		ERROR("port %u wrong MLX5_CQE_SIZE environment variable value: "
-		      "it should be set to %u", dev->data->port_id,
-		      RTE_CACHE_LINE_SIZE);
+		DRV_LOG(ERR,
+			"port %u wrong MLX5_CQE_SIZE environment variable"
+			" value: it should be set to %u",
+			dev->data->port_id, RTE_CACHE_LINE_SIZE);
 		rte_errno = EINVAL;
 		goto error;
 	}
@@ -749,11 +764,11 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 	rxq_data->rq_ci = (1 << rxq_data->elts_n) >> rxq_data->sges_n;
 	rte_wmb();
 	*rxq_data->rq_db = rte_cpu_to_be_32(rxq_data->rq_ci);
-	DEBUG("port %u rxq %u updated with %p", dev->data->port_id, idx,
-	      (void *)&tmpl);
+	DRV_LOG(DEBUG, "port %u rxq %u updated with %p", dev->data->port_id,
+		idx, (void *)&tmpl);
 	rte_atomic32_inc(&tmpl->refcnt);
-	DEBUG("port %u Verbs Rx queue %u: refcnt %d", dev->data->port_id, idx,
-	      rte_atomic32_read(&tmpl->refcnt));
+	DRV_LOG(DEBUG, "port %u Verbs Rx queue %u: refcnt %d",
+		dev->data->port_id, idx, rte_atomic32_read(&tmpl->refcnt));
 	LIST_INSERT_HEAD(&priv->rxqsibv, tmpl, next);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
 	return tmpl;
@@ -798,9 +813,9 @@ mlx5_rxq_ibv_get(struct rte_eth_dev *dev, uint16_t idx)
 	if (rxq_ctrl->ibv) {
 		mlx5_mr_get(dev, rxq_data->mp);
 		rte_atomic32_inc(&rxq_ctrl->ibv->refcnt);
-		DEBUG("port %u Verbs Rx queue %u: refcnt %d",
-		      dev->data->port_id, rxq_ctrl->idx,
-		      rte_atomic32_read(&rxq_ctrl->ibv->refcnt));
+		DRV_LOG(DEBUG, "port %u Verbs Rx queue %u: refcnt %d",
+			dev->data->port_id, rxq_ctrl->idx,
+			rte_atomic32_read(&rxq_ctrl->ibv->refcnt));
 	}
 	return rxq_ctrl->ibv;
 }
@@ -826,9 +841,9 @@ mlx5_rxq_ibv_release(struct mlx5_rxq_ibv *rxq_ibv)
 	ret = mlx5_mr_release(rxq_ibv->mr);
 	if (!ret)
 		rxq_ibv->mr = NULL;
-	DEBUG("port %u Verbs Rx queue %u: refcnt %d",
-	      rxq_ibv->rxq_ctrl->priv->dev->data->port_id,
-	      rxq_ibv->rxq_ctrl->idx, rte_atomic32_read(&rxq_ibv->refcnt));
+	DRV_LOG(DEBUG, "port %u Verbs Rx queue %u: refcnt %d",
+		rxq_ibv->rxq_ctrl->priv->dev->data->port_id,
+		rxq_ibv->rxq_ctrl->idx, rte_atomic32_read(&rxq_ibv->refcnt));
 	if (rte_atomic32_dec_and_test(&rxq_ibv->refcnt)) {
 		rxq_free_elts(rxq_ibv->rxq_ctrl);
 		claim_zero(ibv_destroy_wq(rxq_ibv->wq));
@@ -859,8 +874,8 @@ mlx5_rxq_ibv_verify(struct rte_eth_dev *dev)
 	struct mlx5_rxq_ibv *rxq_ibv;
 
 	LIST_FOREACH(rxq_ibv, &priv->rxqsibv, next) {
-		DEBUG("port %u Verbs Rx queue %u still referenced",
-		      dev->data->port_id, rxq_ibv->rxq_ctrl->idx);
+		DRV_LOG(DEBUG, "port %u Verbs Rx queue %u still referenced",
+			dev->data->port_id, rxq_ibv->rxq_ctrl->idx);
 		++ret;
 	}
 	return ret;
@@ -936,30 +951,33 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		size = mb_len * (1 << tmpl->rxq.sges_n);
 		size -= RTE_PKTMBUF_HEADROOM;
 		if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
-			ERROR("port %u too many SGEs (%u) needed to handle"
-			      " requested maximum packet size %u",
-			      dev->data->port_id,
-			      1 << sges_n,
-			      dev->data->dev_conf.rxmode.max_rx_pkt_len);
+			DRV_LOG(ERR,
+				"port %u too many SGEs (%u) needed to handle"
+				" requested maximum packet size %u",
+				dev->data->port_id,
+				1 << sges_n,
+				dev->data->dev_conf.rxmode.max_rx_pkt_len);
 			rte_errno = EOVERFLOW;
 			goto error;
 		}
 	} else {
-		WARN("port %u the requested maximum Rx packet size (%u) is"
-		     " larger than a single mbuf (%u) and scattered"
-		     " mode has not been requested",
-		     dev->data->port_id,
-		     dev->data->dev_conf.rxmode.max_rx_pkt_len,
-		     mb_len - RTE_PKTMBUF_HEADROOM);
+		DRV_LOG(WARNING,
+			"port %u the requested maximum Rx packet size (%u) is"
+			" larger than a single mbuf (%u) and scattered mode has"
+			" not been requested",
+			dev->data->port_id,
+			dev->data->dev_conf.rxmode.max_rx_pkt_len,
+			mb_len - RTE_PKTMBUF_HEADROOM);
 	}
-	DEBUG("port %u maximum number of segments per packet: %u",
-	      dev->data->port_id, 1 << tmpl->rxq.sges_n);
+	DRV_LOG(DEBUG, "port %u maximum number of segments per packet: %u",
+		dev->data->port_id, 1 << tmpl->rxq.sges_n);
 	if (desc % (1 << tmpl->rxq.sges_n)) {
-		ERROR("port %u number of Rx queue descriptors (%u) is not a"
-		      " multiple of SGEs per packet (%u)",
-		      dev->data->port_id,
-		      desc,
-		      1 << tmpl->rxq.sges_n);
+		DRV_LOG(ERR,
+			"port %u number of Rx queue descriptors (%u) is not a"
+			" multiple of SGEs per packet (%u)",
+			dev->data->port_id,
+			desc,
+			1 << tmpl->rxq.sges_n);
 		rte_errno = EINVAL;
 		goto error;
 	}
@@ -980,17 +998,19 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	} else if (priv->hw_fcs_strip) {
 		tmpl->rxq.crc_present = 1;
 	} else {
-		WARN("port %u CRC stripping has been disabled but will still"
-		     " be performed by hardware, make sure MLNX_OFED and"
-		     " firmware are up to date",
-		     dev->data->port_id);
+		DRV_LOG(WARNING,
+			"port %u CRC stripping has been disabled but will"
+			" still be performed by hardware, make sure MLNX_OFED"
+			" and firmware are up to date",
+			dev->data->port_id);
 		tmpl->rxq.crc_present = 0;
 	}
-	DEBUG("port %u CRC stripping is %s, %u bytes will be subtracted from"
-	      " incoming frames to hide it",
-	      dev->data->port_id,
-	      tmpl->rxq.crc_present ? "disabled" : "enabled",
-	      tmpl->rxq.crc_present << 2);
+	DRV_LOG(DEBUG,
+		"port %u CRC stripping is %s, %u bytes will be subtracted from"
+		" incoming frames to hide it",
+		dev->data->port_id,
+		tmpl->rxq.crc_present ? "disabled" : "enabled",
+		tmpl->rxq.crc_present << 2);
 	/* Save port ID. */
 	tmpl->rxq.rss_hash = priv->rxqs_n > 1;
 	tmpl->rxq.port_id = dev->data->port_id;
@@ -1002,8 +1022,8 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		(struct rte_mbuf *(*)[1 << tmpl->rxq.elts_n])(tmpl + 1);
 	tmpl->idx = idx;
 	rte_atomic32_inc(&tmpl->refcnt);
-	DEBUG("port %u Rx queue %u: refcnt %d", dev->data->port_id,
-	      idx, rte_atomic32_read(&tmpl->refcnt));
+	DRV_LOG(DEBUG, "port %u Rx queue %u: refcnt %d", dev->data->port_id,
+		idx, rte_atomic32_read(&tmpl->refcnt));
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 error:
@@ -1034,8 +1054,9 @@ mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
 				    rxq);
 		mlx5_rxq_ibv_get(dev, idx);
 		rte_atomic32_inc(&rxq_ctrl->refcnt);
-		DEBUG("port %u Rx queue %u: refcnt %d", dev->data->port_id,
-		      rxq_ctrl->idx, rte_atomic32_read(&rxq_ctrl->refcnt));
+		DRV_LOG(DEBUG, "port %u Rx queue %u: refcnt %d",
+			dev->data->port_id, rxq_ctrl->idx,
+			rte_atomic32_read(&rxq_ctrl->refcnt));
 	}
 	return rxq_ctrl;
 }
@@ -1063,8 +1084,8 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 	assert(rxq_ctrl->priv);
 	if (rxq_ctrl->ibv && !mlx5_rxq_ibv_release(rxq_ctrl->ibv))
 		rxq_ctrl->ibv = NULL;
-	DEBUG("port %u Rx queue %u: refcnt %d", dev->data->port_id,
-	      rxq_ctrl->idx, rte_atomic32_read(&rxq_ctrl->refcnt));
+	DRV_LOG(DEBUG, "port %u Rx queue %u: refcnt %d", dev->data->port_id,
+		rxq_ctrl->idx, rte_atomic32_read(&rxq_ctrl->refcnt));
 	if (rte_atomic32_dec_and_test(&rxq_ctrl->refcnt)) {
 		LIST_REMOVE(rxq_ctrl, next);
 		rte_free(rxq_ctrl);
@@ -1117,8 +1138,8 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
 	int ret = 0;
 
 	LIST_FOREACH(rxq_ctrl, &priv->rxqsctrl, next) {
-		DEBUG("port %u Rx queue %u still referenced",
-		      dev->data->port_id, rxq_ctrl->idx);
+		DRV_LOG(DEBUG, "port %u Rx Queue %u still referenced",
+			dev->data->port_id, rxq_ctrl->idx);
 		++ret;
 	}
 	return ret;
@@ -1181,12 +1202,14 @@ mlx5_ind_table_ibv_new(struct rte_eth_dev *dev, uint16_t queues[],
 	}
 	rte_atomic32_inc(&ind_tbl->refcnt);
 	LIST_INSERT_HEAD(&priv->ind_tbls, ind_tbl, next);
-	DEBUG("port %u indirection table %p: refcnt %d", dev->data->port_id,
-	      (void *)ind_tbl, rte_atomic32_read(&ind_tbl->refcnt));
+	DRV_LOG(DEBUG, "port %u indirection table %p: refcnt %d",
+		dev->data->port_id, (void *)ind_tbl,
+		rte_atomic32_read(&ind_tbl->refcnt));
 	return ind_tbl;
 error:
 	rte_free(ind_tbl);
-	DEBUG("port %u cannot create indirection table", dev->data->port_id);
+	DRV_LOG(DEBUG, "port %u cannot create indirection table",
+		dev->data->port_id);
 	return NULL;
 }
@@ -1221,9 +1244,9 @@ mlx5_ind_table_ibv_get(struct rte_eth_dev *dev, uint16_t queues[],
 		unsigned int i;
 
 		rte_atomic32_inc(&ind_tbl->refcnt);
-		DEBUG("port %u indirection table %p: refcnt %d",
-		      dev->data->port_id, (void *)ind_tbl,
-		      rte_atomic32_read(&ind_tbl->refcnt));
+		DRV_LOG(DEBUG, "port %u indirection table %p: refcnt %d",
+			dev->data->port_id, (void *)ind_tbl,
+			rte_atomic32_read(&ind_tbl->refcnt));
 		for (i = 0; i != ind_tbl->queues_n; ++i)
 			mlx5_rxq_get(dev, ind_tbl->queues[i]);
 	}
@@ -1247,9 +1270,9 @@ mlx5_ind_table_ibv_release(struct rte_eth_dev *dev,
 {
 	unsigned int i;
 
-	DEBUG("port %u indirection table %p: refcnt %d",
-	      ((struct priv *)dev->data->dev_private)->port,
-	      (void *)ind_tbl, rte_atomic32_read(&ind_tbl->refcnt));
+	DRV_LOG(DEBUG, "port %u indirection table %p: refcnt %d",
+		((struct priv *)dev->data->dev_private)->port,
+		(void *)ind_tbl, rte_atomic32_read(&ind_tbl->refcnt));
 	if (rte_atomic32_dec_and_test(&ind_tbl->refcnt))
 		claim_zero(ibv_destroy_rwq_ind_table(ind_tbl->ind_table));
 	for (i = 0; i != ind_tbl->queues_n; ++i)
@@ -1279,8 +1302,9 @@ mlx5_ind_table_ibv_verify(struct rte_eth_dev *dev)
 	int ret = 0;
 
 	LIST_FOREACH(ind_tbl, &priv->ind_tbls, next) {
-		DEBUG("port %u Verbs indirection table %p still referenced",
-		      dev->data->port_id, (void *)ind_tbl);
+		DRV_LOG(DEBUG,
+			"port %u Verbs indirection table %p still referenced",
+			dev->data->port_id, (void *)ind_tbl);
 		++ret;
 	}
 	return ret;
@@ -1355,8 +1379,9 @@ mlx5_hrxq_new(struct rte_eth_dev *dev, uint8_t *rss_key, uint8_t rss_key_len,
 	memcpy(hrxq->rss_key, rss_key, rss_key_len);
 	rte_atomic32_inc(&hrxq->refcnt);
 	LIST_INSERT_HEAD(&priv->hrxqs, hrxq, next);
-	DEBUG("port %u hash Rx queue %p: refcnt %d", dev->data->port_id,
-	      (void *)hrxq, rte_atomic32_read(&hrxq->refcnt));
+	DRV_LOG(DEBUG, "port %u hash Rx queue %p: refcnt %d",
+		dev->data->port_id, (void *)hrxq,
+		rte_atomic32_read(&hrxq->refcnt));
 	return hrxq;
 error:
 	err = rte_errno; /* Save rte_errno before cleanup. */
@@ -1408,8 +1433,9 @@ mlx5_hrxq_get(struct rte_eth_dev *dev, uint8_t *rss_key, uint8_t rss_key_len,
 			continue;
 		}
 		rte_atomic32_inc(&hrxq->refcnt);
-		DEBUG("port %u hash Rx queue %p: refcnt %d", dev->data->port_id,
-		      (void *)hrxq, rte_atomic32_read(&hrxq->refcnt));
+		DRV_LOG(DEBUG, "port %u hash Rx queue %p: refcnt %d",
+			dev->data->port_id, (void *)hrxq,
+			rte_atomic32_read(&hrxq->refcnt));
 		return hrxq;
 	}
 	return NULL;
@@ -1429,9 +1455,9 @@ mlx5_hrxq_get(struct rte_eth_dev *dev, uint8_t *rss_key, uint8_t rss_key_len,
 int
 mlx5_hrxq_release(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq)
 {
-	DEBUG("port %u hash Rx queue %p: refcnt %d",
-	      ((struct priv *)dev->data->dev_private)->port,
-	      (void *)hrxq, rte_atomic32_read(&hrxq->refcnt));
+	DRV_LOG(DEBUG, "port %u hash Rx queue %p: refcnt %d",
+		((struct priv *)dev->data->dev_private)->port,
+		(void *)hrxq, rte_atomic32_read(&hrxq->refcnt));
 	if (rte_atomic32_dec_and_test(&hrxq->refcnt)) {
 		claim_zero(ibv_destroy_qp(hrxq->qp));
 		mlx5_ind_table_ibv_release(dev, hrxq->ind_table);
@@ -1460,8 +1486,9 @@ mlx5_hrxq_ibv_verify(struct rte_eth_dev *dev)
 	int ret = 0;
 
 	LIST_FOREACH(hrxq, &priv->hrxqs, next) {
-		DEBUG("port %u Verbs hash Rx queue %p still referenced",
-		      dev->data->port_id, (void *)hrxq);
+		DRV_LOG(DEBUG,
+			"port %u Verbs hash Rx queue %p still referenced",
+			dev->data->port_id, (void *)hrxq);
 		++ret;
 	}
 	return ret;
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 47a8729a8..29019f792 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -400,9 +400,10 @@ check_cqe(volatile struct mlx5_cqe *cqe,
 		    (syndrome == MLX5_CQE_SYNDROME_REMOTE_ABORTED_ERR))
 			return 0;
 		if (!check_cqe_seen(cqe)) {
-			ERROR("unexpected CQE error %u (0x%02x)"
-			      " syndrome 0x%02x",
-			      op_code, op_code, syndrome);
+			DRV_LOG(ERR,
+				"unexpected CQE error %u (0x%02x) syndrome"
+				" 0x%02x",
+				op_code, op_code, syndrome);
 			rte_hexdump(stderr, "MLX5 Error CQE:",
 				    (const void *)((uintptr_t)err_cqe),
 				    sizeof(*err_cqe));
@@ -411,8 +412,8 @@ check_cqe(volatile struct mlx5_cqe *cqe,
 	} else if ((op_code != MLX5_CQE_RESP_SEND) &&
 		   (op_code != MLX5_CQE_REQ)) {
 		if (!check_cqe_seen(cqe)) {
-			ERROR("unexpected CQE opcode %u (0x%02x)",
-			      op_code, op_code);
+			DRV_LOG(ERR, "unexpected CQE opcode %u (0x%02x)",
+				op_code, op_code);
 			rte_hexdump(stderr, "MLX5 CQE:",
 				    (const void *)((uintptr_t)cqe),
 				    sizeof(*cqe));
@@ -472,7 +473,7 @@ mlx5_tx_complete(struct mlx5_txq_data *txq)
 	if ((MLX5_CQE_OPCODE(cqe->op_own) == MLX5_CQE_RESP_ERR) ||
 	    (MLX5_CQE_OPCODE(cqe->op_own) == MLX5_CQE_REQ_ERR)) {
 		if (!check_cqe_seen(cqe)) {
-			ERROR("unexpected error CQE, Tx stopped");
+			DRV_LOG(ERR, "unexpected error CQE, Tx stopped");
 			rte_hexdump(stderr, "MLX5 TXQ:",
 				    (const void *)((uintptr_t)txq->wqes),
 				    ((1 << txq->wqe_n) *
@@ -589,8 +590,8 @@ mlx5_tx_mb2mr(struct mlx5_txq_data *txq, struct rte_mbuf *mb)
 	} else {
 		struct rte_mempool *mp = mlx5_tx_mb2mp(mb);
 
-		WARN("failed to register mempool 0x%p(%s)",
-		     (void *)mp, mp->name);
+		DRV_LOG(WARNING, "failed to register mempool 0x%p(%s)",
+			(void *)mp, mp->name);
 	}
 	return (uint32_t)-1;
 }
diff --git a/drivers/net/mlx5/mlx5_socket.c b/drivers/net/mlx5/mlx5_socket.c
index f4a5c835e..bdbd390d1 100644
--- a/drivers/net/mlx5/mlx5_socket.c
+++ b/drivers/net/mlx5/mlx5_socket.c
@@ -68,8 +68,8 @@ mlx5_socket_init(struct rte_eth_dev *dev)
 	ret = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (ret < 0) {
 		rte_errno = errno;
-		WARN("port %u secondary process not supported: %s",
-		     dev->data->port_id, strerror(errno));
+		DRV_LOG(WARNING, "port %u secondary process not supported: %s",
+			dev->data->port_id, strerror(errno));
 		goto error;
 	}
 	priv->primary_socket = ret;
@@ -90,15 +90,17 @@ mlx5_socket_init(struct rte_eth_dev *dev)
 		   sizeof(sun));
 	if (ret < 0) {
 		rte_errno = errno;
-		WARN("port %u cannot bind socket, secondary process not"
-		     " supported: %s", dev->data->port_id, strerror(errno));
+		DRV_LOG(WARNING,
+			"port %u cannot bind socket, secondary process not"
+			" supported: %s",
+			dev->data->port_id, strerror(errno));
 		goto close;
 	}
 	ret = listen(priv->primary_socket, 0);
 	if (ret < 0) {
 		rte_errno = errno;
-		WARN("port %u secondary process not supported: %s",
-		     dev->data->port_id, strerror(errno));
+		DRV_LOG(WARNING, "port %u secondary process not supported: %s",
+			dev->data->port_id, strerror(errno));
 		goto close;
 	}
 	return 0;
@@ -157,29 +159,29 @@ mlx5_socket_handle(struct rte_eth_dev *dev)
 	/* Accept the connection from the client. */
 	conn_sock = accept(priv->primary_socket, NULL, NULL);
 	if (conn_sock < 0) {
-		WARN("port %u connection failed: %s", dev->data->port_id,
-		     strerror(errno));
+		DRV_LOG(WARNING, "port %u connection failed: %s",
+			dev->data->port_id, strerror(errno));
 		return;
 	}
 	ret = setsockopt(conn_sock, SOL_SOCKET, SO_PASSCRED, &(int){1},
 			 sizeof(int));
 	if (ret < 0) {
 		ret = errno;
-		WARN("port %u cannot change socket options: %s",
-		     dev->data->port_id, strerror(rte_errno));
+		DRV_LOG(WARNING, "port %u cannot change socket options: %s",
+			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
 	ret = recvmsg(conn_sock, &msg, MSG_WAITALL);
 	if (ret < 0) {
 		ret = errno;
-		WARN("port %u received an empty message: %s",
-		     dev->data->port_id, strerror(rte_errno));
+		DRV_LOG(WARNING, "port %u received an empty message: %s",
+			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
 	/* Expect to receive credentials only. */
 	cmsg = CMSG_FIRSTHDR(&msg);
 	if (cmsg == NULL) {
-		WARN("port %u no message", dev->data->port_id);
+		DRV_LOG(WARNING, "port %u no message", dev->data->port_id);
 		goto error;
 	}
 	if ((cmsg->cmsg_type == SCM_CREDENTIALS) &&
@@ -189,13 +191,15 @@ mlx5_socket_handle(struct rte_eth_dev *dev)
 	}
 	cmsg = CMSG_NXTHDR(&msg, cmsg);
 	if (cmsg != NULL) {
-		WARN("port %u message wrongly formatted", dev->data->port_id);
+		DRV_LOG(WARNING, "port %u message wrongly formatted",
+			dev->data->port_id);
 		goto error;
 	}
 	/* Make sure all the ancillary data was received and valid. */
 	if ((cred == NULL) || (cred->uid != getuid()) ||
 	    (cred->gid != getgid())) {
-		WARN("port %u wrong credentials", dev->data->port_id);
+		DRV_LOG(WARNING, "port %u wrong credentials",
+			dev->data->port_id);
 		goto error;
 	}
 	/* Set-up the ancillary data. */
@@ -208,7 +212,8 @@ mlx5_socket_handle(struct rte_eth_dev *dev)
 	*fd = priv->ctx->cmd_fd;
 	ret = sendmsg(conn_sock, &msg, 0);
 	if (ret < 0)
-		WARN("port %u cannot send response", dev->data->port_id);
+		DRV_LOG(WARNING, "port %u cannot send response",
+			dev->data->port_id);
 error:
 	close(conn_sock);
 }
@@ -250,7 +255,8 @@ mlx5_socket_connect(struct rte_eth_dev *dev)
 	ret = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (ret < 0) {
 		rte_errno = errno;
-		WARN("port %u cannot connect to primary", dev->data->port_id);
+		DRV_LOG(WARNING, "port %u cannot connect to primary",
+			dev->data->port_id);
 		goto error;
 	}
 	socket_fd = ret;
@@ -259,13 +265,15 @@ mlx5_socket_connect(struct rte_eth_dev *dev)
 	ret = connect(socket_fd, (const struct sockaddr *)&sun, sizeof(sun));
 	if (ret < 0) {
 		rte_errno = errno;
-		WARN("port %u cannot connect to primary", dev->data->port_id);
+		DRV_LOG(WARNING, "port %u cannot connect to primary",
+			dev->data->port_id);
 		goto error;
 	}
 	cmsg = CMSG_FIRSTHDR(&msg);
 	if (cmsg == NULL) {
 		rte_errno = EINVAL;
-		DEBUG("port %u cannot get first message", dev->data->port_id);
+		DRV_LOG(DEBUG, "port %u cannot get first message",
+			dev->data->port_id);
 		goto error;
 	}
 	cmsg->cmsg_level = SOL_SOCKET;
@@ -274,7 +282,8 @@ mlx5_socket_connect(struct rte_eth_dev *dev)
 	cred = (struct ucred *)CMSG_DATA(cmsg);
 	if (cred == NULL) {
 		rte_errno = EINVAL;
-		DEBUG("port %u no credentials received", dev->data->port_id);
+		DRV_LOG(DEBUG, "port %u no credentials received",
+			dev->data->port_id);
 		goto error;
 	}
 	cred->pid = getpid();
@@ -283,27 +292,29 @@ mlx5_socket_connect(struct rte_eth_dev *dev)
 	ret = sendmsg(socket_fd, &msg, MSG_DONTWAIT);
 	if (ret < 0) {
 		rte_errno = errno;
-		WARN("port %u cannot send credentials to primary: %s",
-		     dev->data->port_id, strerror(errno));
+		DRV_LOG(WARNING,
+			"port %u cannot send credentials to primary: %s",
+			dev->data->port_id, strerror(errno));
 		goto error;
 	}
 	ret = recvmsg(socket_fd, &msg, MSG_WAITALL);
 	if (ret <= 0) {
 		rte_errno = errno;
-		WARN("port %u no message from primary: %s",
-		     dev->data->port_id, strerror(errno));
+		DRV_LOG(WARNING, "port %u no message from primary: %s",
+			dev->data->port_id, strerror(errno));
 		goto error;
 	}
 	cmsg = CMSG_FIRSTHDR(&msg);
 	if (cmsg == NULL) {
 		rte_errno = EINVAL;
-		WARN("port %u no file descriptor received", dev->data->port_id);
+		DRV_LOG(WARNING, "port %u no file descriptor received",
+			dev->data->port_id);
 		goto error;
 	}
 	fd = (int *)CMSG_DATA(cmsg);
 	if (*fd < 0) {
-		WARN("port %u no file descriptor received: %s",
-		     dev->data->port_id, strerror(errno));
+		DRV_LOG(WARNING, "port %u no file descriptor received: %s",
+			dev->data->port_id, strerror(errno));
 		rte_errno = *fd;
 		goto error;
 	}
diff --git a/drivers/net/mlx5/mlx5_stats.c b/drivers/net/mlx5/mlx5_stats.c
index cd8a94a48..7dda2691d 100644
--- a/drivers/net/mlx5/mlx5_stats.c
+++ b/drivers/net/mlx5/mlx5_stats.c
@@ -160,8 +160,9 @@ mlx5_read_dev_counters(struct rte_eth_dev *dev, uint64_t *stats)
 	ifr.ifr_data = (caddr_t)et_stats;
 	ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr);
 	if (ret) {
-		WARN("port %u unable to read statistic values from device",
-		     dev->data->port_id);
+		DRV_LOG(WARNING,
+			"port %u unable to read statistic values from device",
+			dev->data->port_id);
 		return ret;
 	}
 	for (i = 0; i != xstats_n; ++i) {
@@ -207,8 +208,8 @@ mlx5_ethtool_get_stats_n(struct rte_eth_dev *dev) {
 	ifr.ifr_data = (caddr_t)&drvinfo;
 	ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr);
 	if (ret) {
-		WARN("port %u unable to query number of statistics",
-		     dev->data->port_id);
+		DRV_LOG(WARNING, "port %u unable to query number of statistics",
+			dev->data->port_id);
 		return ret;
 	}
 	return drvinfo.n_stats;
@@ -235,8 +236,8 @@ mlx5_xstats_init(struct rte_eth_dev *dev)
 
 	ret = mlx5_ethtool_get_stats_n(dev);
 	if (ret < 0) {
-		WARN("port %u no extended statistics available",
-		     dev->data->port_id);
+		DRV_LOG(WARNING, "port %u no extended statistics available",
+			dev->data->port_id);
 		return;
 	}
 	dev_stats_n = ret;
@@ -247,7 +248,7 @@ mlx5_xstats_init(struct rte_eth_dev *dev)
 		      rte_malloc("xstats_strings",
 				 str_sz + sizeof(struct ethtool_gstrings), 0);
 	if (!strings) {
-		WARN("port %u unable to allocate memory for xstats",
+		DRV_LOG(WARNING, "port %u unable to allocate memory for xstats",
 		     dev->data->port_id);
 		return;
 	}
@@ -257,8 +258,8 @@ mlx5_xstats_init(struct rte_eth_dev *dev)
 	ifr.ifr_data = (caddr_t)strings;
 	ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr);
 	if (ret) {
-		WARN("port %u unable to get statistic names",
-		     dev->data->port_id);
+		DRV_LOG(WARNING, "port %u unable to get statistic names",
+			dev->data->port_id);
 		goto free;
 	}
 	for (j = 0; j != xstats_n; ++j)
@@ -279,9 +280,10 @@ mlx5_xstats_init(struct rte_eth_dev *dev)
 		if (mlx5_counters_init[j].ib)
 			continue;
 		if (xstats_ctrl->dev_table_idx[j] >= dev_stats_n) {
-			WARN("port %u counter \"%s\" is not recognized",
-			     dev->data->port_id,
-			     mlx5_counters_init[j].dpdk_name);
+			DRV_LOG(WARNING,
+				"port %u counter \"%s\" is not recognized",
+				dev->data->port_id,
+				mlx5_counters_init[j].dpdk_name);
 			goto free;
 		}
 	}
@@ -289,8 +291,8 @@ mlx5_xstats_init(struct rte_eth_dev *dev)
 	assert(xstats_n <= MLX5_MAX_XSTATS);
 	ret = mlx5_read_dev_counters(dev, xstats_ctrl->base);
 	if (ret)
-		ERROR("port %u cannot read device counters: %s",
-		      dev->data->port_id, strerror(rte_errno));
+		DRV_LOG(ERR, "port %u cannot read device counters: %s",
+			dev->data->port_id, strerror(rte_errno));
 free:
 	rte_free(strings);
 }
@@ -457,16 +459,16 @@ mlx5_xstats_reset(struct rte_eth_dev *dev)
 
 	stats_n = mlx5_ethtool_get_stats_n(dev);
 	if (stats_n < 0) {
-		ERROR("port %u cannot get stats: %s", dev->data->port_id,
-		      strerror(-stats_n));
+		DRV_LOG(ERR, "port %u cannot get stats: %s", dev->data->port_id,
+			strerror(-stats_n));
 		return;
 	}
 	if (xstats_ctrl->stats_n != stats_n)
 		mlx5_xstats_init(dev);
 	ret = mlx5_read_dev_counters(dev, counters);
 	if (ret) {
-		ERROR("port %u cannot read device counters: %s",
-		      dev->data->port_id, strerror(rte_errno));
+		DRV_LOG(ERR, "port %u cannot read device counters: %s",
+			dev->data->port_id, strerror(rte_errno));
 		return;
 	}
 	for (i = 0; i != n; ++i)
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index fd9b62251..b83c2b900 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -177,39 +177,39 @@ mlx5_dev_start(struct rte_eth_dev *dev)
 	dev->data->dev_started = 1;
 	ret = mlx5_flow_create_drop_queue(dev);
 	if (ret) {
-		ERROR("port %u drop queue allocation failed: %s",
-		      dev->data->port_id, strerror(rte_errno));
+		DRV_LOG(ERR, "port %u drop queue allocation failed: %s",
+			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
-	DEBUG("port %u allocating and configuring hash Rx queues",
-	      dev->data->port_id);
+	DRV_LOG(DEBUG, "port %u allocating and configuring hash Rx queues",
+		dev->data->port_id);
 	rte_mempool_walk(mlx5_mp2mr_iter, priv);
 	ret = mlx5_txq_start(dev);
 	if (ret) {
-		ERROR("port %u Tx queue allocation failed: %s",
-		      dev->data->port_id, strerror(rte_errno));
+		DRV_LOG(ERR, "port %u Tx queue allocation failed: %s",
+			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
 	ret = mlx5_rxq_start(dev);
 	if (ret) {
-		ERROR("port %u Rx queue allocation failed: %s",
-		      dev->data->port_id, strerror(rte_errno));
+		DRV_LOG(ERR, "port %u Rx queue allocation failed: %s",
+			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
 	ret = mlx5_rx_intr_vec_enable(dev);
 	if (ret) {
-		ERROR("port %u Rx interrupt vector creation failed",
-		      dev->data->port_id);
+		DRV_LOG(ERR, "port %u Rx interrupt vector creation failed",
+			dev->data->port_id);
 		goto error;
 	}
 	mlx5_xstats_init(dev);
 	/* Update link status and Tx/Rx callbacks for the first time. */
 	memset(&dev->data->dev_link, 0, sizeof(struct rte_eth_link));
-	INFO("port %u forcing link to be up", dev->data->port_id);
+	DRV_LOG(INFO, "forcing port %u link to be up", dev->data->port_id);
 	ret = mlx5_force_link_status_change(dev, ETH_LINK_UP);
 	if (ret) {
-		DEBUG("failed to set port %u link to be up",
-		      dev->data->port_id);
+		DRV_LOG(DEBUG, "failed to set port %u link to be up",
+			dev->data->port_id);
 		goto error;
 	}
 	mlx5_dev_interrupt_handler_install(dev);
@@ -249,8 +249,8 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
 	dev->tx_pkt_burst = removed_tx_burst;
 	rte_wmb();
 	usleep(1000 * priv->rxqs_n);
-	DEBUG("port %u cleaning up and destroying hash Rx queues",
-	      dev->data->port_id);
+	DRV_LOG(DEBUG, "port %u cleaning up and destroying hash Rx queues",
+		dev->data->port_id);
 	mlx5_flow_stop(dev, &priv->flows);
 	mlx5_traffic_disable(dev);
 	mlx5_rx_intr_vec_disable(dev);
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index b6d0066fc..4e54ff33d 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -74,8 +74,8 @@ txq_alloc_elts(struct mlx5_txq_ctrl *txq_ctrl)
 
 	for (i = 0; (i != elts_n); ++i)
 		(*txq_ctrl->txq.elts)[i] = NULL;
-	DEBUG("port %u Tx queue %u allocated and configured %u WRs",
-	      txq_ctrl->priv->dev->data->port_id, txq_ctrl->idx, elts_n);
+	DRV_LOG(DEBUG, "port %u Tx queue %u allocated and configured %u WRs",
+		txq_ctrl->priv->dev->data->port_id, txq_ctrl->idx, elts_n);
 	txq_ctrl->txq.elts_head = 0;
 	txq_ctrl->txq.elts_tail = 0;
 	txq_ctrl->txq.elts_comp = 0;
@@ -96,8 +96,8 @@ txq_free_elts(struct mlx5_txq_ctrl *txq_ctrl)
 	uint16_t elts_tail = txq_ctrl->txq.elts_tail;
 	struct rte_mbuf *(*elts)[elts_n] = txq_ctrl->txq.elts;
 
-	DEBUG("port %u Tx queue %u freeing WRs",
-	      txq_ctrl->priv->dev->data->port_id, txq_ctrl->idx);
+	DRV_LOG(DEBUG, "port %u Tx queue %u freeing WRs",
+		txq_ctrl->priv->dev->data->port_id, txq_ctrl->idx);
 	txq_ctrl->txq.elts_head = 0;
 	txq_ctrl->txq.elts_tail = 0;
 	txq_ctrl->txq.elts_comp = 0;
@@ -144,40 +144,43 @@ mlx5_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		container_of(txq, struct mlx5_txq_ctrl, txq);
 
 	if (desc <= MLX5_TX_COMP_THRESH) {
-		WARN("port %u number of descriptors requested for Tx queue %u"
-		     " must be higher than MLX5_TX_COMP_THRESH, using"
-		     " %u instead of %u",
-		     dev->data->port_id, idx, MLX5_TX_COMP_THRESH + 1, desc);
+		DRV_LOG(WARNING,
+			"port %u number of descriptors requested for Tx queue"
+			" %u must be higher than MLX5_TX_COMP_THRESH, using %u"
+			" instead of %u",
+			dev->data->port_id, idx, MLX5_TX_COMP_THRESH + 1, desc);
 		desc = MLX5_TX_COMP_THRESH + 1;
 	}
 	if (!rte_is_power_of_2(desc)) {
 		desc = 1 << log2above(desc);
-		WARN("port %u increased number of descriptors in Tx queue %u"
-		     " to the next power of two (%d)",
-		     dev->data->port_id, idx, desc);
+		DRV_LOG(WARNING,
+			"port %u increased number of descriptors in Tx queue"
+			" %u to the next power of two (%d)",
+			dev->data->port_id, idx, desc);
 	}
-	DEBUG("port %u configuring queue %u for %u descriptors",
-	      dev->data->port_id, idx, desc);
+	DRV_LOG(DEBUG, "port %u configuring queue %u for %u descriptors",
+		dev->data->port_id, idx, desc);
 	if (idx >= priv->txqs_n) {
-		ERROR("port %u Tx queue index out of range (%u >= %u)",
-		      dev->data->port_id, idx, priv->txqs_n);
+		DRV_LOG(ERR, "port %u Tx queue index out of range (%u >= %u)",
+			dev->data->port_id, idx, priv->txqs_n);
 		rte_errno = EOVERFLOW;
 		return -rte_errno;
 	}
 	if (!mlx5_txq_releasable(dev, idx)) {
 		rte_errno = EBUSY;
-		ERROR("port %u unable to release queue index %u",
-		      dev->data->port_id, idx);
+		DRV_LOG(ERR, "port %u unable to release queue index %u",
+			dev->data->port_id, idx);
 		return -rte_errno;
 	}
 	mlx5_txq_release(dev, idx);
 	txq_ctrl = mlx5_txq_new(dev, idx, desc, socket, conf);
 	if (!txq_ctrl) {
-		ERROR("port %u unable to allocate queue index %u",
-		      dev->data->port_id, idx);
+		DRV_LOG(ERR, "port %u unable to allocate queue index %u",
+			dev->data->port_id, idx);
 		return -rte_errno;
 	}
-	DEBUG("port %u adding Tx queue %u to list", dev->data->port_id, idx);
+	DRV_LOG(DEBUG, "port %u adding Tx queue %u to list",
+		dev->data->port_id, idx);
 	(*priv->txqs)[idx] = &txq_ctrl->txq;
 	return 0;
 }
@@ -203,8 +206,8 @@ mlx5_tx_queue_release(void *dpdk_txq)
 	for (i = 0; (i != priv->txqs_n); ++i)
 		if ((*priv->txqs)[i] == txq) {
 			mlx5_txq_release(priv->dev, i);
-			DEBUG("port %u removing Tx queue %u from list",
-			      priv->dev->data->port_id, txq_ctrl->idx);
+			DRV_LOG(DEBUG, "port %u removing Tx queue %u from list",
+				priv->dev->data->port_id, txq_ctrl->idx);
 			break;
 		}
 }
@@ -275,9 +278,10 @@ mlx5_tx_uar_remap(struct rte_eth_dev *dev, int fd)
 				   txq_ctrl->uar_mmap_offset);
 		if (ret != addr) {
 			/* fixed mmap have to return same address */
-			ERROR("port %u call to mmap failed on UAR for"
-			      " txq %u", dev->data->port_id,
-			      txq_ctrl->idx);
+			DRV_LOG(ERR,
+				"port %u call to mmap failed on UAR"
+				" for txq %u",
+				dev->data->port_id, txq_ctrl->idx);
 			rte_errno = ENXIO;
 			return -rte_errno;
 		}
@@ -328,8 +332,9 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_TX_QUEUE;
 	priv->verbs_alloc_ctx.obj = txq_ctrl;
 	if (mlx5_getenv_int("MLX5_ENABLE_CQE_COMPRESSION")) {
-		ERROR("port %u MLX5_ENABLE_CQE_COMPRESSION must never be set",
-		      dev->data->port_id);
+		DRV_LOG(ERR,
+			"port %u MLX5_ENABLE_CQE_COMPRESSION must never be set",
+			dev->data->port_id);
 		rte_errno = EINVAL;
 		return NULL;
 	}
@@ -344,8 +349,8 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 		cqe_n += MLX5_TX_COMP_THRESH_INLINE_DIV;
 	tmpl.cq = ibv_create_cq(priv->ctx, cqe_n, NULL, NULL, 0);
 	if (tmpl.cq == NULL) {
-		ERROR("port %u Tx queue %u CQ creation failure",
-		      dev->data->port_id, idx);
+		DRV_LOG(ERR, "port %u Tx queue %u CQ creation failure",
+			dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
 	}
@@ -387,8 +392,8 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 	}
 	tmpl.qp = ibv_create_qp_ex(priv->ctx, &attr.init);
 	if (tmpl.qp == NULL) {
-		ERROR("port %u Tx queue %u QP creation failure",
-		      dev->data->port_id, idx);
+		DRV_LOG(ERR, "port %u Tx queue %u QP creation failure",
+			dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
 	}
@@ -400,8 +405,9 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 	};
 	ret = ibv_modify_qp(tmpl.qp, &attr.mod, (IBV_QP_STATE | IBV_QP_PORT));
 	if (ret) {
-		ERROR("port %u Tx queue %u QP state to IBV_QPS_INIT failed",
-		      dev->data->port_id, idx);
+		DRV_LOG(ERR,
+			"port %u Tx queue %u QP state to IBV_QPS_INIT failed",
+			dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
 	}
@@ -410,24 +416,26 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 	};
 	ret = ibv_modify_qp(tmpl.qp, &attr.mod, IBV_QP_STATE);
 	if (ret) {
-		ERROR("port %u Tx queue %u QP state to IBV_QPS_RTR failed",
-		      dev->data->port_id, idx);
+		DRV_LOG(ERR,
+			"port %u Tx queue %u QP state to IBV_QPS_RTR failed",
+			dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
 	}
 	attr.mod.qp_state = IBV_QPS_RTS;
 	ret = ibv_modify_qp(tmpl.qp, &attr.mod, IBV_QP_STATE);
 	if (ret) {
-		ERROR("port %u Tx queue %u QP state to IBV_QPS_RTS failed",
-		      dev->data->port_id, idx);
+		DRV_LOG(ERR,
+			"port %u Tx queue %u QP state to IBV_QPS_RTS failed",
+			dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
 	}
 	txq_ibv = rte_calloc_socket(__func__, 1, sizeof(struct mlx5_txq_ibv), 0,
 				    txq_ctrl->socket);
 	if (!txq_ibv) {
-		ERROR("port %u Tx queue %u cannot allocate memory",
-		      dev->data->port_id, idx);
+		DRV_LOG(ERR, "port %u Tx queue %u cannot allocate memory",
+			dev->data->port_id, idx);
 		rte_errno = ENOMEM;
 		goto error;
 	}
@@ -441,9 +449,10 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 		goto error;
 	}
 	if (cq_info.cqe_size != RTE_CACHE_LINE_SIZE) {
-		ERROR("port %u wrong MLX5_CQE_SIZE environment variable value: "
-		      "it should be set to %u", dev->data->port_id,
-		      RTE_CACHE_LINE_SIZE);
+		DRV_LOG(ERR,
+			"port %u wrong MLX5_CQE_SIZE environment variable"
+			" value: it should be set to %u",
+			dev->data->port_id, RTE_CACHE_LINE_SIZE);
 		rte_errno = EINVAL;
 		goto error;
 	}
@@ -467,13 +476,15 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 	if (qp.comp_mask & MLX5DV_QP_MASK_UAR_MMAP_OFFSET) {
 		txq_ctrl->uar_mmap_offset = qp.uar_mmap_offset;
 	} else {
-		ERROR("port %u failed to retrieve UAR info, invalid libmlx5.so",
-		      dev->data->port_id);
+		DRV_LOG(ERR,
+			"port %u failed to retrieve UAR info, invalid"
+			" libmlx5.so",
+			dev->data->port_id);
 		rte_errno = EINVAL;
 		goto error;
 	}
-	DEBUG("port %u Verbs Tx queue %u: refcnt %d", dev->data->port_id, idx,
-	      rte_atomic32_read(&txq_ibv->refcnt));
+	DRV_LOG(DEBUG, "port %u Verbs Tx queue %u: refcnt %d",
+		dev->data->port_id, idx, rte_atomic32_read(&txq_ibv->refcnt));
 	LIST_INSERT_HEAD(&priv->txqsibv, txq_ibv, next);
 	txq_ibv->txq_ctrl = txq_ctrl;
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
@@ -513,8 +524,8 @@ mlx5_txq_ibv_get(struct rte_eth_dev *dev, uint16_t idx)
 	txq_ctrl = container_of((*priv->txqs)[idx], struct mlx5_txq_ctrl, txq);
 	if (txq_ctrl->ibv) {
 		rte_atomic32_inc(&txq_ctrl->ibv->refcnt);
-		DEBUG("port %u Verbs Tx queue %u: refcnt %d",
-		      dev->data->port_id, txq_ctrl->idx,
+		DRV_LOG(DEBUG, "port %u Verbs Tx queue %u: refcnt %d",
+			dev->data->port_id, txq_ctrl->idx,
 		      rte_atomic32_read(&txq_ctrl->ibv->refcnt));
 	}
 	return txq_ctrl->ibv;
@@ -533,9 +544,9 @@ int
 mlx5_txq_ibv_release(struct mlx5_txq_ibv *txq_ibv)
 {
 	assert(txq_ibv);
-	DEBUG("port %u Verbs Tx queue %u: refcnt %d",
-	      txq_ibv->txq_ctrl->priv->dev->data->port_id,
-	      txq_ibv->txq_ctrl->idx, rte_atomic32_read(&txq_ibv->refcnt));
+	DRV_LOG(DEBUG, "port %u Verbs Tx queue %u: refcnt %d",
+		txq_ibv->txq_ctrl->priv->dev->data->port_id,
+		txq_ibv->txq_ctrl->idx, rte_atomic32_read(&txq_ibv->refcnt));
 	if (rte_atomic32_dec_and_test(&txq_ibv->refcnt)) {
 		claim_zero(ibv_destroy_qp(txq_ibv->qp));
 		claim_zero(ibv_destroy_cq(txq_ibv->cq));
@@ -576,9 +587,8 @@ mlx5_txq_ibv_verify(struct rte_eth_dev *dev)
 	struct mlx5_txq_ibv *txq_ibv;
 
 	LIST_FOREACH(txq_ibv, &priv->txqsibv, next) {
-		DEBUG("port %u Verbs Tx queue %u still referenced",
-		      dev->data->port_id,
-		      txq_ibv->txq_ctrl->idx);
+		DRV_LOG(DEBUG, "port %u Verbs Tx queue %u still referenced",
+			dev->data->port_id, txq_ibv->txq_ctrl->idx);
 		++ret;
 	}
 	return ret;
@@ -628,10 +638,10 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	if (priv->mps == MLX5_MPW_ENHANCED)
 		tmpl->txq.mpw_hdr_dseg = priv->mpw_hdr_dseg;
 	/* MRs will be registered in mp2mr[] later. */
-	DEBUG("port %u priv->device_attr.max_qp_wr is %d", dev->data->port_id,
-	      priv->device_attr.orig_attr.max_qp_wr);
-	DEBUG("port %u priv->device_attr.max_sge is %d", dev->data->port_id,
-	      priv->device_attr.orig_attr.max_sge);
+	DRV_LOG(DEBUG, "port %u priv->device_attr.max_qp_wr is %d",
+		dev->data->port_id, priv->device_attr.orig_attr.max_qp_wr);
+	DRV_LOG(DEBUG, "port %u priv->device_attr.max_sge is %d",
+		dev->data->port_id, priv->device_attr.orig_attr.max_sge);
 	if (priv->txq_inline && (priv->txqs_n >= priv->txqs_inline)) {
 		unsigned int ds_cnt;
@@ -682,10 +692,11 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 			max_inline = max_inline - (max_inline %
 						   RTE_CACHE_LINE_SIZE);
-			WARN("port %u txq inline is too large (%d) setting it"
-			     " to the maximum possible: %d\n",
-			     priv->dev->data->port_id, priv->txq_inline,
-			     max_inline);
+			DRV_LOG(WARNING,
+				"port %u txq inline is too large (%d) setting it"
+				" to the maximum possible: %d\n",
+				priv->dev->data->port_id, priv->txq_inline,
+				max_inline);
 			tmpl->txq.max_inline = max_inline / RTE_CACHE_LINE_SIZE;
 		}
 	}
@@ -701,8 +712,8 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		(struct rte_mbuf *(*)[1 << tmpl->txq.elts_n])(tmpl + 1);
 	tmpl->txq.stats.idx = idx;
 	rte_atomic32_inc(&tmpl->refcnt);
-	DEBUG("port %u Tx queue %u: refcnt %d", dev->data->port_id,
-	      idx, rte_atomic32_read(&tmpl->refcnt));
+	DRV_LOG(DEBUG, "port %u Tx queue %u: refcnt %d", dev->data->port_id,
+		idx, rte_atomic32_read(&tmpl->refcnt));
 	LIST_INSERT_HEAD(&priv->txqsctrl, tmpl, next);
 	return tmpl;
 }
@@ -737,8 +748,9 @@ mlx5_txq_get(struct rte_eth_dev *dev, uint16_t idx)
 						ctrl->txq.mp2mr[i]->mp));
 		}
 		rte_atomic32_inc(&ctrl->refcnt);
-		DEBUG("port %u Tx queue %u refcnt %d", dev->data->port_id,
-		      ctrl->idx, rte_atomic32_read(&ctrl->refcnt));
+		DRV_LOG(DEBUG, "port %u Tx queue %u refcnt %d",
+			dev->data->port_id,
+			ctrl->idx, rte_atomic32_read(&ctrl->refcnt));
 	}
 	return ctrl;
 }
@@ -765,8 +777,8 @@ mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx)
 	if (!(*priv->txqs)[idx])
 		return 0;
 	txq = container_of((*priv->txqs)[idx], struct mlx5_txq_ctrl, txq);
-	DEBUG("port %u Tx queue %u: refcnt %d", dev->data->port_id,
-	      txq->idx, rte_atomic32_read(&txq->refcnt));
+	DRV_LOG(DEBUG, "port %u Tx queue %u: refcnt %d", dev->data->port_id,
+		txq->idx, rte_atomic32_read(&txq->refcnt));
 	if (txq->ibv && !mlx5_txq_ibv_release(txq->ibv))
 		txq->ibv = NULL;
 	for (i = 0; i != MLX5_PMD_TX_MP_CACHE; ++i) {
@@ -828,8 +840,8 @@ mlx5_txq_verify(struct rte_eth_dev *dev)
 	int ret = 0;
 
 	LIST_FOREACH(txq, &priv->txqsctrl, next) {
-		DEBUG("port %u Tx queue %u still referenced",
-		      dev->data->port_id, txq->idx);
+		DRV_LOG(DEBUG, "port %u Tx queue %u still referenced",
+			dev->data->port_id, txq->idx);
 		++ret;
 	}
 	return ret;
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 2fbd10b18..6c85c0739 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -89,14 +89,21 @@ pmd_drv_log_basename(const char *s)
 	return s;
 }
 
+extern int mlx5_logtype;
+
+#define PMD_DRV_LOG___(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		mlx5_logtype, \
+		RTE_FMT(MLX5_DRIVER_NAME ": " \
+			RTE_FMT_HEAD(__VA_ARGS__,), \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
 /*
  * When debugging is enabled (NDEBUG not defined), file, line and function
  * information replace the driver name (MLX5_DRIVER_NAME) in log messages.
  */
 #ifndef NDEBUG
-#define PMD_DRV_LOG___(level, ...) \
-	ERRNO_SAFE(RTE_LOG(level, PMD, __VA_ARGS__))
 #define PMD_DRV_LOG__(level, ...) \
 	PMD_DRV_LOG___(level, "%s:%u: %s(): " __VA_ARGS__)
 #define PMD_DRV_LOG_(level, s, ...) \
@@ -108,9 +115,6 @@ pmd_drv_log_basename(const char *s)
 		       __VA_ARGS__)
 
 #else /* NDEBUG */
-
-#define PMD_DRV_LOG___(level, ...) \
-	ERRNO_SAFE(RTE_LOG(level, PMD, MLX5_DRIVER_NAME ": " __VA_ARGS__))
 #define PMD_DRV_LOG__(level, ...) \
 	PMD_DRV_LOG___(level, __VA_ARGS__)
 #define PMD_DRV_LOG_(level, s, ...) \
@@ -119,33 +123,24 @@ pmd_drv_log_basename(const char *s)
 #endif /* NDEBUG */
 
 /* Generic printf()-like logging macro with automatic line feed. */
-#define PMD_DRV_LOG(level, ...) \
+#define DRV_LOG(level, ...) \
 	PMD_DRV_LOG_(level, \
 		     __VA_ARGS__ PMD_DRV_LOG_STRIP PMD_DRV_LOG_OPAREN, \
 		     PMD_DRV_LOG_CPAREN)
 
-/*
- * Like assert(), DEBUG() becomes a no-op and claim_zero() does not perform
- * any check when debugging is disabled.
- */
+/* claim_zero() does not perform any check when debugging is disabled. */
 #ifndef NDEBUG
-#define DEBUG(...) PMD_DRV_LOG(DEBUG, __VA_ARGS__)
 #define claim_zero(...) assert((__VA_ARGS__) == 0)
 #define claim_nonzero(...) assert((__VA_ARGS__) != 0)
 
 #else /* NDEBUG */
-#define DEBUG(...) (void)0
 #define claim_zero(...) (__VA_ARGS__)
 #define claim_nonzero(...) (__VA_ARGS__)
 #endif /* NDEBUG */
 
-#define INFO(...) PMD_DRV_LOG(INFO, __VA_ARGS__)
-#define WARN(...) PMD_DRV_LOG(WARNING, __VA_ARGS__)
-#define ERROR(...) PMD_DRV_LOG(ERR, __VA_ARGS__)
-
 /* Convenience macros for accessing mbuf fields. */
 #define NEXT(m) ((m)->next)
 #define DATA_LEN(m) ((m)->data_len)
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 377d06884..dbfa8a0c9 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -62,8 +62,8 @@ mlx5_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 	struct priv *priv = dev->data->dev_private;
 	unsigned int i;
 
-	DEBUG("port %u %s VLAN filter ID %" PRIu16,
-	      dev->data->port_id, (on ? "enable" : "disable"), vlan_id);
+	DRV_LOG(DEBUG, "port %u %s VLAN filter ID %" PRIu16,
+		dev->data->port_id, (on ? "enable" : "disable"), vlan_id);
 	assert(priv->vlan_filter_n <= RTE_DIM(priv->vlan_filter));
 	for (i = 0; (i != priv->vlan_filter_n); ++i)
 		if (priv->vlan_filter[i] == vlan_id)
@@ -125,18 +125,18 @@ mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 
 	/* Validate hw support */
 	if (!priv->hw_vlan_strip) {
-		ERROR("port %u VLAN stripping is not supported",
-		      dev->data->port_id);
+		DRV_LOG(ERR, "port %u VLAN stripping is not supported",
+			dev->data->port_id);
 		return;
 	}
 	/* Validate queue number */
 	if (queue >= priv->rxqs_n) {
-		ERROR("port %u VLAN stripping, invalid queue number %d",
-		      dev->data->port_id, queue);
+		DRV_LOG(ERR, "port %u VLAN stripping, invalid queue number %d",
+			dev->data->port_id, queue);
 		return;
 	}
-	DEBUG("port %u set VLAN offloads 0x%x for port %uqueue %d",
-	      dev->data->port_id, vlan_offloads, rxq->port_id, queue);
+	DRV_LOG(DEBUG, "port %u set VLAN offloads 0x%x for port %uqueue %d",
+		dev->data->port_id, vlan_offloads, rxq->port_id, queue);
 	if (!rxq_ctrl->ibv) {
 		/* Update related bits in RX queue.
*/ rxq->vlan_strip = !!on; @@ -149,8 +149,8 @@ mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on) }; ret = ibv_modify_wq(rxq_ctrl->ibv->wq, &mod); if (ret) { - ERROR("port %u failed to modified stripping mode: %s", - dev->data->port_id, strerror(rte_errno)); + DRV_LOG(ERR, "port %u failed to modified stripping mode: %s", + dev->data->port_id, strerror(rte_errno)); return; } /* Update related bits in RX queue. */ @@ -178,8 +178,8 @@ mlx5_vlan_offload_set(struct rte_eth_dev *dev, int mask) int hw_vlan_strip = !!dev->data->dev_conf.rxmode.hw_vlan_strip; if (!priv->hw_vlan_strip) { - ERROR("port %u VLAN stripping is not supported", - dev->data->port_id); + DRV_LOG(ERR, "port %u VLAN stripping is not supported", + dev->data->port_id); return 0; } /* Run on every RX queue and set/reset VLAN stripping. */ -- 2.11.0