From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id D0D17A00C2;
	Mon, 22 Aug 2022 12:59:03 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 6FB8E40DFD;
	Mon, 22 Aug 2022 12:59:03 +0200 (CEST)
Received: from mga07.intel.com (mga07.intel.com [134.134.136.100])
 by mails.dpdk.org (Postfix) with ESMTP id 94ECA40694;
 Mon, 22 Aug 2022 12:59:01 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
 d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
 t=1661165942; x=1692701942;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=CMLLSR/FSXFeMqkpUCwDkO7m9QMVfJDCvH8BV8RY4aY=;
 b=Al8AoZi3VAkvfLA7nH+nbFSIxCK5Gc8fFOWFiN2LC0WWL8V0hqFrKlcT
 i01cJU+/lT+3krt1xo/BRSwoAr90i3NiOxbxL2AnxtOvRp9g1q9/ehDC2
 ORXVyZIrqmDTHXMoSXS9v0EOuKaWArZxOGnNB24qOq7pYNmt2OdRUVFYY
 h+Ma7ry0P/TsquUrOE0bFAAqIGyAiP9wzd1lfERnWTwmp90BZOHKtxZmG
 FTLwNSrkQWodvXCVXyq9aw2c51dQikJSvuaLJ+CTbGb/nwDUp6qprcq0w
 bbC8XFQGejqI6IK7/GYF3PNyEfFCH6xaJpPwzjnmzeXMjmejYSsEVENiv A==;
X-IronPort-AV: E=McAfee;i="6500,9779,10446"; a="357365397"
X-IronPort-AV: E=Sophos;i="5.93,254,1654585200"; d="scan'208";a="357365397"
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 22 Aug 2022 03:59:00 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.93,254,1654585200"; d="scan'208";a="936993658"
Received: from silpixa00401090.ir.intel.com ([10.55.129.56])
 by fmsmga005.fm.intel.com with ESMTP; 22 Aug 2022 03:58:58 -0700
From: Reshma Pattan <reshma.pattan@intel.com>
To: dev@dpdk.org,
	david.hunt@intel.com
Cc: Hamza Khan <hamza.khan@intel.com>, alan.carew@intel.com, stable@dpdk.org,
 Reshma Pattan <reshma.pattan@intel.com>
Subject: [PATCH v4] examples/vm_power_manager: use safe version of list
 iterator
Date: Mon, 22 Aug 2022 11:58:55 +0100
Message-Id: <20220822105855.6180-1-reshma.pattan@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220708085141.925246-1-hamza.khan@intel.com>
References: <20220708085141.925246-1-hamza.khan@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

From: Hamza Khan <hamza.khan@intel.com>

Currently, when vm_power_manager exits, the LIST_FOREACH macro is used
to iterate over VM info structures while freeing them. Each pass of the
loop advances by reading the next pointer out of the element that has
just been freed, which is a use-after-free. To address this, replace
all usages of the LIST_* macros with their TAILQ_* equivalents and use
the RTE_TAILQ_FOREACH_SAFE macro, which saves the next element before
the loop body runs, to iterate over and delete the VM info structures,
as sketched below.

* The change is small and confined to channel_manager.c; it does not
  affect other code
* The patch has been tested
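
For illustration, a minimal sketch of the unsafe and safe patterns
(simplified from channel_manager_exit(); declarations and locking
omitted):

	/* Unsafe: to advance, LIST_FOREACH reads the next pointer
	 * out of the element that the body has just freed. */
	LIST_FOREACH(vm_info, &vm_list_head, vms_info) {
		LIST_REMOVE(vm_info, vms_info);
		rte_free(vm_info);
	}

	/* Safe: the next element is saved in tmp before the body
	 * runs, so freeing vm_info does not affect iteration. */
	RTE_TAILQ_FOREACH_SAFE(vm_info, &vm_list_head, vms_info, tmp) {
		TAILQ_REMOVE(&vm_list_head, vm_info, vms_info);
		rte_free(vm_info);
	}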

Fixes: e8ae9b662506 ("examples/vm_power: channel manager and monitor in host")
Cc: alan.carew@intel.com
Cc: stable@dpdk.org

Signed-off-by: Hamza Khan <hamza.khan@intel.com>
Reviewed-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
---
v4: fix header file inclusion
---
 examples/vm_power_manager/channel_manager.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/examples/vm_power_manager/channel_manager.c b/examples/vm_power_manager/channel_manager.c
index 838465ab4b..cb872ad2d5 100644
--- a/examples/vm_power_manager/channel_manager.c
+++ b/examples/vm_power_manager/channel_manager.c
@@ -22,6 +22,7 @@
 #include <rte_mempool.h>
 #include <rte_log.h>
 #include <rte_spinlock.h>
+#include <rte_tailq.h>
 
 #include <libvirt/libvirt.h>
 
@@ -58,16 +59,16 @@ struct virtual_machine_info {
 	virDomainInfo info;
 	rte_spinlock_t config_spinlock;
 	int allow_query;
-	LIST_ENTRY(virtual_machine_info) vms_info;
+	RTE_TAILQ_ENTRY(virtual_machine_info) vms_info;
 };
 
-LIST_HEAD(, virtual_machine_info) vm_list_head;
+RTE_TAILQ_HEAD(, virtual_machine_info) vm_list_head;
 
 static struct virtual_machine_info *
 find_domain_by_name(const char *name)
 {
 	struct virtual_machine_info *info;
-	LIST_FOREACH(info, &vm_list_head, vms_info) {
+	RTE_TAILQ_FOREACH(info, &vm_list_head, vms_info) {
 		if (!strncmp(info->name, name, CHANNEL_MGR_MAX_NAME_LEN-1))
 			return info;
 	}
@@ -878,7 +879,7 @@ add_vm(const char *vm_name)
 
 	new_domain->allow_query = 0;
 	rte_spinlock_init(&(new_domain->config_spinlock));
-	LIST_INSERT_HEAD(&vm_list_head, new_domain, vms_info);
+	TAILQ_INSERT_HEAD(&vm_list_head, new_domain, vms_info);
 	return 0;
 }
 
@@ -900,7 +901,7 @@ remove_vm(const char *vm_name)
 		rte_spinlock_unlock(&vm_info->config_spinlock);
 		return -1;
 	}
-	LIST_REMOVE(vm_info, vms_info);
+	TAILQ_REMOVE(&vm_list_head, vm_info, vms_info);
 	rte_spinlock_unlock(&vm_info->config_spinlock);
 	rte_free(vm_info);
 	return 0;
@@ -953,7 +954,7 @@ channel_manager_init(const char *path __rte_unused)
 {
 	virNodeInfo info;
 
-	LIST_INIT(&vm_list_head);
+	TAILQ_INIT(&vm_list_head);
 	if (connect_hypervisor(path) < 0) {
 		global_n_host_cpus = 64;
 		global_hypervisor_available = 0;
@@ -1005,9 +1006,9 @@ channel_manager_exit(void)
 {
 	unsigned i;
 	char mask[RTE_MAX_LCORE];
-	struct virtual_machine_info *vm_info;
+	struct virtual_machine_info *vm_info, *tmp;
 
-	LIST_FOREACH(vm_info, &vm_list_head, vms_info) {
+	RTE_TAILQ_FOREACH_SAFE(vm_info, &vm_list_head, vms_info, tmp) {
 
 		rte_spinlock_lock(&(vm_info->config_spinlock));
 
@@ -1022,7 +1023,7 @@ channel_manager_exit(void)
 		}
 		rte_spinlock_unlock(&(vm_info->config_spinlock));
 
-		LIST_REMOVE(vm_info, vms_info);
+		TAILQ_REMOVE(&vm_list_head, vm_info, vms_info);
 		rte_free(vm_info);
 	}
 
-- 
2.25.1