From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueming Li <xuemingl@nvidia.com>
To: dev@dpdk.org
Cc: Ferruh Yigit, Andrew Rybchenko, Singh Aman Deep, Thomas Monjalon,
 John W. Linville, Ciara Loftus, Qi Zhang, Hemant Agrawal, Sachin Saxena,
 Rosen Xu, Gagandeep Singh, Bruce Richardson, Maxime Coquelin, Chenbo Xia
Subject: [dpdk-dev] [PATCH v7 1/2] ethdev: make queue release callback optional
Date: Wed, 6 Oct 2021 19:18:21 +0800
Message-ID: <20211006111822.437298-2-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20211006111822.437298-1-xuemingl@nvidia.com>
References: <20210727034134.20556-1-xuemingl@nvidia.com>
 <20211006111822.437298-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

Some drivers do not need an Rx or Tx queue release callback, so make
both callbacks optional. Clean up the empty queue release callbacks
in those drivers.

Signed-off-by: Xueming Li
Reviewed-by: Andrew Rybchenko
Acked-by: Ferruh Yigit
Acked-by: Thomas Monjalon
---
 app/test/virtual_pmd.c                    | 12 ----
 drivers/net/af_packet/rte_eth_af_packet.c |  7 --
 drivers/net/af_xdp/rte_eth_af_xdp.c       |  7 --
 drivers/net/dpaa/dpaa_ethdev.c            | 13 ----
 drivers/net/dpaa2/dpaa2_ethdev.c          |  7 --
 drivers/net/ipn3ke/ipn3ke_representor.c   | 12 ----
 drivers/net/kni/rte_eth_kni.c             |  7 --
 drivers/net/pcap/pcap_ethdev.c            |  7 --
 drivers/net/pfe/pfe_ethdev.c              | 14 ----
 drivers/net/ring/rte_eth_ring.c           |  4 --
 drivers/net/virtio/virtio_ethdev.c        |  8 ---
 lib/ethdev/rte_ethdev.c                   | 86 ++++++++++-------
 12 files changed, 36 insertions(+), 148 deletions(-)
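With this change, a PMD that allocates nothing per queue can simply leave
the release callbacks unset: ethdev now checks the function pointers for
NULL and skips them instead of failing with -ENOTSUP. A minimal sketch of
such an ops table (purely illustrative; the example_* names below are
hypothetical and not part of this patch):

	#include <ethdev_driver.h>	/* driver-side ethdev API providing struct eth_dev_ops */

	/*
	 * Hypothetical minimal PMD: no per-queue resources to free, so
	 * .rx_queue_release and .tx_queue_release are simply not provided.
	 * With this patch, queue teardown skips the NULL callbacks.
	 */
	static const struct eth_dev_ops example_pmd_ops = {
		.dev_configure  = example_dev_configure,	/* assumed defined elsewhere */
		.dev_infos_get  = example_dev_infos_get,	/* assumed defined elsewhere */
		.rx_queue_setup = example_rx_queue_setup,	/* assumed defined elsewhere */
		.tx_queue_setup = example_tx_queue_setup,	/* assumed defined elsewhere */
		.link_update    = example_link_update,		/* assumed defined elsewhere */
		/* .rx_queue_release / .tx_queue_release intentionally omitted */
	};

Drivers that do free per-queue resources keep providing the callbacks
exactly as before; only the empty stubs removed below become unnecessary.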
diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7036f401ed9..7e15b47eb0f 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -163,16 +163,6 @@ virtual_ethdev_tx_queue_setup_fail(struct rte_eth_dev *dev __rte_unused,
 	return -1;
 }
 
-static void
-virtual_ethdev_rx_queue_release(void *q __rte_unused)
-{
-}
-
-static void
-virtual_ethdev_tx_queue_release(void *q __rte_unused)
-{
-}
-
 static int
 virtual_ethdev_link_update_success(struct rte_eth_dev *bonded_eth_dev,
 	int wait_to_complete __rte_unused)
@@ -243,8 +233,6 @@ static const struct eth_dev_ops virtual_ethdev_default_dev_ops = {
 	.dev_infos_get = virtual_ethdev_info_get,
 	.rx_queue_setup = virtual_ethdev_rx_queue_setup_success,
 	.tx_queue_setup = virtual_ethdev_tx_queue_setup_success,
-	.rx_queue_release = virtual_ethdev_rx_queue_release,
-	.tx_queue_release = virtual_ethdev_tx_queue_release,
 	.link_update = virtual_ethdev_link_update_success,
 	.mac_addr_set = virtual_ethdev_mac_address_set,
 	.stats_get = virtual_ethdev_stats_get,
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index fcd80903995..c73d2ec5c86 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -427,11 +427,6 @@ eth_dev_close(struct rte_eth_dev *dev)
 	return 0;
 }
 
-static void
-eth_queue_release(void *q __rte_unused)
-{
-}
-
 static int
 eth_link_update(struct rte_eth_dev *dev __rte_unused,
 		int wait_to_complete __rte_unused)
@@ -594,8 +589,6 @@ static const struct eth_dev_ops ops = {
 	.promiscuous_disable = eth_dev_promiscuous_disable,
 	.rx_queue_setup = eth_rx_queue_setup,
 	.tx_queue_setup = eth_tx_queue_setup,
-	.rx_queue_release = eth_queue_release,
-	.tx_queue_release = eth_queue_release,
 	.link_update = eth_link_update,
 	.stats_get = eth_stats_get,
 	.stats_reset = eth_stats_reset,
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 9bea0a895a3..a619dd218d0 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -989,11 +989,6 @@ eth_dev_close(struct rte_eth_dev *dev)
 	return 0;
 }
 
-static void
-eth_queue_release(void *q __rte_unused)
-{
-}
-
 static int
 eth_link_update(struct rte_eth_dev *dev __rte_unused,
 		int wait_to_complete __rte_unused)
@@ -1474,8 +1469,6 @@ static const struct eth_dev_ops ops = {
 	.promiscuous_disable = eth_dev_promiscuous_disable,
 	.rx_queue_setup = eth_rx_queue_setup,
 	.tx_queue_setup = eth_tx_queue_setup,
-	.rx_queue_release = eth_queue_release,
-	.tx_queue_release = eth_queue_release,
 	.link_update = eth_link_update,
 	.stats_get = eth_stats_get,
 	.stats_reset = eth_stats_reset,
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249df..2c12956ff6b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1233,12 +1233,6 @@ dpaa_eth_eventq_detach(const struct rte_eth_dev *dev,
 	return 0;
 }
 
-static
-void dpaa_eth_rx_queue_release(void *rxq __rte_unused)
-{
-	PMD_INIT_FUNC_TRACE();
-}
-
 static
 int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			    uint16_t nb_desc __rte_unused,
@@ -1272,11 +1266,6 @@ int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	return 0;
 }
 
-static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
-{
-	PMD_INIT_FUNC_TRACE();
-}
-
 static uint32_t
 dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
@@ -1571,8 +1560,6 @@ static struct eth_dev_ops dpaa_devops = {
 
 	.rx_queue_setup = dpaa_eth_rx_queue_setup,
 	.tx_queue_setup = dpaa_eth_tx_queue_setup,
-	.rx_queue_release = dpaa_eth_rx_queue_release,
-	.tx_queue_release = dpaa_eth_tx_queue_release,
 	.rx_burst_mode_get = dpaa_dev_rx_burst_mode_get,
 	.tx_burst_mode_get = dpaa_dev_tx_burst_mode_get,
 	.rxq_info_get = dpaa_rxq_info_get,
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e2..48ffbf6c214 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1004,12 +1004,6 @@ dpaa2_dev_rx_queue_release(void *q __rte_unused)
 	}
 }
 
-static void
-dpaa2_dev_tx_queue_release(void *q __rte_unused)
-{
-	PMD_INIT_FUNC_TRACE();
-}
-
 static uint32_t
 dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
@@ -2427,7 +2421,6 @@ static struct eth_dev_ops dpaa2_ethdev_ops = {
 	.rx_queue_setup = dpaa2_dev_rx_queue_setup,
 	.rx_queue_release = dpaa2_dev_rx_queue_release,
 	.tx_queue_setup = dpaa2_dev_tx_queue_setup,
-	.tx_queue_release = dpaa2_dev_tx_queue_release,
 	.rx_burst_mode_get = dpaa2_dev_rx_burst_mode_get,
 	.tx_burst_mode_get = dpaa2_dev_tx_burst_mode_get,
 	.flow_ctrl_get = dpaa2_flow_ctrl_get,
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 589d9fa5877..694435a4ae2 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -288,11 +288,6 @@ ipn3ke_rpst_rx_queue_setup(__rte_unused struct rte_eth_dev *dev,
 	return 0;
 }
 
-static void
-ipn3ke_rpst_rx_queue_release(__rte_unused void *rxq)
-{
-}
-
 static int
 ipn3ke_rpst_tx_queue_setup(__rte_unused struct rte_eth_dev *dev,
 	__rte_unused uint16_t queue_idx, __rte_unused uint16_t nb_desc,
@@ -302,11 +297,6 @@ ipn3ke_rpst_tx_queue_setup(__rte_unused struct rte_eth_dev *dev,
 	return 0;
 }
 
-static void
-ipn3ke_rpst_tx_queue_release(__rte_unused void *txq)
-{
-}
-
 /* Statistics collected by each port, VSI, VEB, and S-channel */
 struct ipn3ke_rpst_eth_stats {
 	uint64_t tx_bytes; /* gotc */
@@ -2865,9 +2855,7 @@ static const struct eth_dev_ops ipn3ke_rpst_dev_ops = {
 	.tx_queue_start = ipn3ke_rpst_tx_queue_start,
 	.tx_queue_stop = ipn3ke_rpst_tx_queue_stop,
 	.rx_queue_setup = ipn3ke_rpst_rx_queue_setup,
-	.rx_queue_release = ipn3ke_rpst_rx_queue_release,
 	.tx_queue_setup = ipn3ke_rpst_tx_queue_setup,
-	.tx_queue_release = ipn3ke_rpst_tx_queue_release,
 
 	.dev_set_link_up = ipn3ke_rpst_dev_set_link_up,
 	.dev_set_link_down = ipn3ke_rpst_dev_set_link_down,
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
index 871d11c4133..cb9f7c8e820 100644
--- a/drivers/net/kni/rte_eth_kni.c
+++ b/drivers/net/kni/rte_eth_kni.c
@@ -284,11 +284,6 @@ eth_kni_tx_queue_setup(struct rte_eth_dev *dev,
 	return 0;
 }
 
-static void
-eth_kni_queue_release(void *q __rte_unused)
-{
-}
-
 static int
 eth_kni_link_update(struct rte_eth_dev *dev __rte_unused,
 		int wait_to_complete __rte_unused)
@@ -362,8 +357,6 @@ static const struct eth_dev_ops eth_kni_ops = {
 	.dev_infos_get = eth_kni_dev_info,
 	.rx_queue_setup = eth_kni_rx_queue_setup,
 	.tx_queue_setup = eth_kni_tx_queue_setup,
-	.rx_queue_release = eth_kni_queue_release,
-	.tx_queue_release = eth_kni_queue_release,
 	.link_update = eth_kni_link_update,
 	.stats_get = eth_kni_stats_get,
 	.stats_reset = eth_kni_stats_reset,
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index 3566aea0105..d695c5eef7b 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -857,11 +857,6 @@ eth_dev_close(struct rte_eth_dev *dev)
 	return 0;
 }
 
-static void
-eth_queue_release(void *q __rte_unused)
-{
-}
-
 static int
 eth_link_update(struct rte_eth_dev *dev __rte_unused,
 		int wait_to_complete __rte_unused)
@@ -1006,8 +1001,6 @@ static const struct eth_dev_ops ops = {
 	.tx_queue_start = eth_tx_queue_start,
 	.rx_queue_stop = eth_rx_queue_stop,
 	.tx_queue_stop = eth_tx_queue_stop,
-	.rx_queue_release = eth_queue_release,
-	.tx_queue_release = eth_queue_release,
 	.link_update = eth_link_update,
 	.stats_get = eth_stats_get,
 	.stats_reset = eth_stats_reset,
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index feec4d10a26..4c7f568bf42 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -494,18 +494,6 @@ pfe_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	return 0;
 }
 
-static void
-pfe_rx_queue_release(void *q __rte_unused)
-{
-	PMD_INIT_FUNC_TRACE();
-}
-
-static void
-pfe_tx_queue_release(void *q __rte_unused)
-{
-	PMD_INIT_FUNC_TRACE();
-}
-
 static int
 pfe_tx_queue_setup(struct rte_eth_dev *dev,
 		   uint16_t queue_idx,
@@ -759,9 +747,7 @@ static const struct eth_dev_ops ops = {
 	.dev_configure = pfe_eth_configure,
 	.dev_infos_get = pfe_eth_info,
 	.rx_queue_setup = pfe_rx_queue_setup,
-	.rx_queue_release = pfe_rx_queue_release,
 	.tx_queue_setup = pfe_tx_queue_setup,
-	.tx_queue_release = pfe_tx_queue_release,
 	.dev_supported_ptypes_get = pfe_supported_ptypes_get,
 	.link_update = pfe_eth_link_update,
 	.promiscuous_enable = pfe_promiscuous_enable,
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 1faf38a714c..0440019e07e 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -225,8 +225,6 @@ eth_mac_addr_add(struct rte_eth_dev *dev __rte_unused,
 	return 0;
 }
 
-static void
-eth_queue_release(void *q __rte_unused) { ; }
 static int
 eth_link_update(struct rte_eth_dev *dev __rte_unused,
 		int wait_to_complete __rte_unused) { return 0; }
@@ -272,8 +270,6 @@ static const struct eth_dev_ops ops = {
 	.dev_infos_get = eth_dev_info,
 	.rx_queue_setup = eth_rx_queue_setup,
 	.tx_queue_setup = eth_tx_queue_setup,
-	.rx_queue_release = eth_queue_release,
-	.tx_queue_release = eth_queue_release,
 	.link_update = eth_link_update,
 	.stats_get = eth_stats_get,
 	.stats_reset = eth_stats_reset,
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index b60eeb24abe..6aa36b3f394 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -370,12 +370,6 @@ virtio_set_multiple_queues(struct rte_eth_dev *dev, uint16_t nb_queues)
 	return 0;
 }
 
-static void
-virtio_dev_queue_release(void *queue __rte_unused)
-{
-	/* do nothing */
-}
-
 static uint16_t
 virtio_get_nr_vq(struct virtio_hw *hw)
 {
@@ -981,9 +975,7 @@ static const struct eth_dev_ops virtio_eth_dev_ops = {
 	.rx_queue_setup = virtio_dev_rx_queue_setup,
 	.rx_queue_intr_enable = virtio_dev_rx_queue_intr_enable,
 	.rx_queue_intr_disable = virtio_dev_rx_queue_intr_disable,
-	.rx_queue_release = virtio_dev_queue_release,
 	.tx_queue_setup = virtio_dev_tx_queue_setup,
-	.tx_queue_release = virtio_dev_queue_release,
 	/* collect stats per queue */
 	.queue_stats_mapping_set = virtio_dev_queue_stats_mapping_set,
 	.vlan_filter_set = virtio_vlan_filter_set,
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index daf5ca92422..4439ad336e2 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -889,6 +889,32 @@ eth_err(uint16_t port_id, int ret)
 	return ret;
 }
 
+static void
+eth_dev_rxq_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	void **rxq = dev->data->rx_queues;
+
+	if (rxq[qid] == NULL)
+		return;
+
+	if (dev->dev_ops->rx_queue_release != NULL)
+		(*dev->dev_ops->rx_queue_release)(rxq[qid]);
+	rxq[qid] = NULL;
+}
+
+static void
+eth_dev_txq_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	void **txq = dev->data->tx_queues;
+
+	if (txq[qid] == NULL)
+		return;
+
+	if (dev->dev_ops->tx_queue_release != NULL)
+		(*dev->dev_ops->tx_queue_release)(txq[qid]);
+	txq[qid] = NULL;
+}
+
 static int
 eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 {
@@ -905,12 +931,10 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else if (dev->data->rx_queues != NULL && nb_queues != 0) { /* re-configure */
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
+		for (i = nb_queues; i < old_nb_queues; i++)
+			eth_dev_rxq_release(dev, i);
 
 		rxq = dev->data->rx_queues;
-
-		for (i = nb_queues; i < old_nb_queues; i++)
-			(*dev->dev_ops->rx_queue_release)(rxq[i]);
 		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
 				RTE_CACHE_LINE_SIZE);
 		if (rxq == NULL)
@@ -925,12 +949,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 		dev->data->rx_queues = rxq;
 
 	} else if (dev->data->rx_queues != NULL && nb_queues == 0) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
-
-		rxq = dev->data->rx_queues;
-
 		for (i = nb_queues; i < old_nb_queues; i++)
-			(*dev->dev_ops->rx_queue_release)(rxq[i]);
+			eth_dev_rxq_release(dev, i);
 
 		rte_free(dev->data->rx_queues);
 		dev->data->rx_queues = NULL;
@@ -1145,12 +1165,10 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else if (dev->data->tx_queues != NULL && nb_queues != 0) { /* re-configure */
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
+		for (i = nb_queues; i < old_nb_queues; i++)
+			eth_dev_txq_release(dev, i);
 
 		txq = dev->data->tx_queues;
-
-		for (i = nb_queues; i < old_nb_queues; i++)
-			(*dev->dev_ops->tx_queue_release)(txq[i]);
 		txq = rte_realloc(txq, sizeof(txq[0]) * nb_queues,
 				  RTE_CACHE_LINE_SIZE);
 		if (txq == NULL)
@@ -1165,12 +1183,8 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 		dev->data->tx_queues = txq;
 
 	} else if (dev->data->tx_queues != NULL && nb_queues == 0) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
-
-		txq = dev->data->tx_queues;
-
 		for (i = nb_queues; i < old_nb_queues; i++)
-			(*dev->dev_ops->tx_queue_release)(txq[i]);
+			eth_dev_txq_release(dev, i);
 
 		rte_free(dev->data->tx_queues);
 		dev->data->tx_queues = NULL;
@@ -2006,7 +2020,6 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_rxconf local_conf;
-	void **rxq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -2110,13 +2123,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 			RTE_ETH_QUEUE_STATE_STOPPED))
 		return -EBUSY;
 
-	rxq = dev->data->rx_queues;
-	if (rxq[rx_queue_id]) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
-					-ENOTSUP);
-		(*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
-		rxq[rx_queue_id] = NULL;
-	}
+	eth_dev_rxq_release(dev, rx_queue_id);
 
 	if (rx_conf == NULL)
 		rx_conf = &dev_info.default_rxconf;
@@ -2189,7 +2196,6 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	int ret;
 	struct rte_eth_dev *dev;
 	struct rte_eth_hairpin_cap cap;
-	void **rxq;
 	int i;
 	int count;
 
@@ -2246,13 +2252,7 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	}
 	if (dev->data->dev_started)
 		return -EBUSY;
-	rxq = dev->data->rx_queues;
-	if (rxq[rx_queue_id] != NULL) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
-					-ENOTSUP);
-		(*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
-		rxq[rx_queue_id] = NULL;
-	}
+	eth_dev_rxq_release(dev, rx_queue_id);
 	ret = (*dev->dev_ops->rx_hairpin_queue_setup)(dev, rx_queue_id,
 						      nb_rx_desc, conf);
 	if (ret == 0)
@@ -2269,7 +2269,6 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_txconf local_conf;
-	void **txq;
 	int ret;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
@@ -2314,13 +2313,7 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 			RTE_ETH_QUEUE_STATE_STOPPED))
 		return -EBUSY;
 
-	txq = dev->data->tx_queues;
-	if (txq[tx_queue_id]) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
-					-ENOTSUP);
-		(*dev->dev_ops->tx_queue_release)(txq[tx_queue_id]);
-		txq[tx_queue_id] = NULL;
-	}
+	eth_dev_txq_release(dev, tx_queue_id);
 
 	if (tx_conf == NULL)
 		tx_conf = &dev_info.default_txconf;
@@ -2368,7 +2361,6 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_hairpin_cap cap;
-	void **txq;
 	int i;
 	int count;
 	int ret;
@@ -2426,13 +2418,7 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 	}
 	if (dev->data->dev_started)
 		return -EBUSY;
-	txq = dev->data->tx_queues;
-	if (txq[tx_queue_id] != NULL) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
-					-ENOTSUP);
-		(*dev->dev_ops->tx_queue_release)(txq[tx_queue_id]);
-		txq[tx_queue_id] = NULL;
-	}
+	eth_dev_txq_release(dev, tx_queue_id);
 	ret = (*dev->dev_ops->tx_hairpin_queue_setup)
 		(dev, tx_queue_id, nb_tx_desc, conf);
 	if (ret == 0)
-- 
2.33.0