From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Rybchenko
To: Chas Williams
CC: , Ivan Ilchenko
Date: Tue, 3 Sep 2019 14:56:47 +0100
Message-ID: <1567519051-28189-14-git-send-email-arybchenko@solarflare.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1567519051-28189-1-git-send-email-arybchenko@solarflare.com>
References: <1566915962-5472-1-git-send-email-arybchenko@solarflare.com>
 <1567519051-28189-1-git-send-email-arybchenko@solarflare.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 13/54] net/bonding: check status of getting ethdev info
List-Id: DPDK patches and discussions

From: Ivan Ilchenko

rte_eth_dev_info_get() return value was changed from void to int, so
this patch modifies rte_eth_dev_info_get() usage across net/bonding
according to its new return type.
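For reference, the checking pattern applied at each call site is roughly as
follows (an illustrative sketch only; the helper name and the USER1 log type
are placeholders and not part of this patch):

/*
 * Sketch of the pattern: the int return value of rte_eth_dev_info_get()
 * is examined, logged and propagated instead of being ignored.
 */
#include <string.h>

#include <rte_ethdev.h>
#include <rte_log.h>

static int
example_query_dev_info(uint16_t port_id, struct rte_eth_dev_info *dev_info)
{
	int ret;

	ret = rte_eth_dev_info_get(port_id, dev_info);
	if (ret != 0) {
		/* ret is a negative errno value; report and propagate it. */
		RTE_LOG(ERR, USER1,
			"Error during getting device (port %u) info: %s\n",
			port_id, strerror(-ret));
		return ret;
	}

	return 0;
}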
Signed-off-by: Ivan Ilchenko
Signed-off-by: Andrew Rybchenko
---
 drivers/net/bonding/rte_eth_bond_api.c | 10 +++++++++-
 drivers/net/bonding/rte_eth_bond_pmd.c | 36 ++++++++++++++++++++++++++++++----
 2 files changed, 41 insertions(+), 5 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 0fc4c5e..e2e27e9 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -452,6 +452,7 @@
 	struct bond_dev_private *internals;
 	struct rte_eth_link link_props;
 	struct rte_eth_dev_info dev_info;
+	int ret;
 
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	internals = bonded_eth_dev->data->dev_private;
@@ -465,7 +466,14 @@
 		return -1;
 	}
 
-	rte_eth_dev_info_get(slave_port_id, &dev_info);
+	ret = rte_eth_dev_info_get(slave_port_id, &dev_info);
+	if (ret != 0) {
+		RTE_BOND_LOG(ERR,
+			"%s: Error during getting device (port %u) info: %s\n",
+			__func__, slave_port_id, strerror(-ret));
+
+		return ret;
+	}
 	if (dev_info.max_rx_pktlen < internals->max_rx_pktlen) {
 		RTE_BOND_LOG(ERR, "Slave (port %u) max_rx_pktlen too small",
 			     slave_port_id);
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 97ab3f2..a1b5014 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -186,7 +186,15 @@
 		return -1;
 	}
 
-	rte_eth_dev_info_get(slave_port, &slave_info);
+	ret = rte_eth_dev_info_get(slave_port, &slave_info);
+	if (ret != 0) {
+		RTE_BOND_LOG(ERR,
+			"%s: Error during getting device (port %u) info: %s\n",
+			__func__, slave_port, strerror(-ret));
+
+		return ret;
+	}
+
 	if (slave_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
 			slave_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
 		RTE_BOND_LOG(ERR,
@@ -204,10 +212,19 @@
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 	struct rte_eth_dev_info bond_info;
 	uint16_t idx;
+	int ret;
 
 	/* Verify if all slaves in bonding supports flow director and */
 	if (internals->slave_count > 0) {
-		rte_eth_dev_info_get(bond_dev->data->port_id, &bond_info);
+		ret = rte_eth_dev_info_get(bond_dev->data->port_id, &bond_info);
+		if (ret != 0) {
+			RTE_BOND_LOG(ERR,
+				"%s: Error during getting device (port %u) info: %s\n",
+				__func__, bond_dev->data->port_id,
+				strerror(-ret));
+
+			return ret;
+		}
 
 		internals->mode4.dedicated_queues.rx_qid = bond_info.nb_rx_queues;
 		internals->mode4.dedicated_queues.tx_qid = bond_info.nb_tx_queues;
@@ -2102,6 +2119,8 @@ struct bwg_slave {
 bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
 	struct bond_dev_private *internals = dev->data->dev_private;
+	struct bond_slave_details slave;
+	int ret;
 
 	uint16_t max_nb_rx_queues = UINT16_MAX;
 	uint16_t max_nb_tx_queues = UINT16_MAX;
@@ -2123,8 +2142,17 @@ struct bwg_slave {
 		uint16_t idx;
 
 		for (idx = 0; idx < internals->slave_count; idx++) {
-			rte_eth_dev_info_get(internals->slaves[idx].port_id,
-					&slave_info);
+			slave = internals->slaves[idx];
+			ret = rte_eth_dev_info_get(slave.port_id, &slave_info);
+			if (ret != 0) {
+				RTE_BOND_LOG(ERR,
+					"%s: Error during getting device (port %u) info: %s\n",
+					__func__,
+					slave.port_id,
+					strerror(-ret));
+
+				return;
+			}
 
 			if (slave_info.max_rx_queues < max_nb_rx_queues)
 				max_nb_rx_queues = slave_info.max_rx_queues;
-- 
1.8.3.1