From: xuan.ding@intel.com
To: maxime.coquelin@redhat.com,
	chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, xingguang.he@intel.com,
 yvonnex.yang@intel.com, cheng1.jiang@intel.com, yuanx.wang@intel.com,
 wenwux.ma@intel.com, Xuan Ding <xuan.ding@intel.com>
Subject: [PATCH v4 2/2] examples/vhost: unconfigure DMA vchannel
Date: Thu, 13 Oct 2022 06:40:40 +0000
Message-Id: <20221013064040.98489-3-xuan.ding@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20221013064040.98489-1-xuan.ding@intel.com>
References: <20220814140442.82525-1-xuan.ding@intel.com>
 <20221013064040.98489-1-xuan.ding@intel.com>

From: Xuan Ding <xuan.ding@intel.com>

This patch applies rte_vhost_async_dma_unconfigure() to manually free
DMA vchannels. Before unconfiguring, make sure the specified DMA device
is no longer used by any vhost port.
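
A minimal sketch of the release order this patch follows on device
destroy (release_dma is an illustrative helper, not part of this patch;
the per-device dma_ref_count table and vchannel id 0 mirror the patch,
with error handling trimmed for brevity):

	/* Per-DMA-device reference count, incremented in open_dma(). */
	static int16_t dma_ref_count[RTE_DMADEV_DEFAULT_MAX];

	static void
	release_dma(int vid, int16_t dma_id, uint16_t queue_id)
	{
		/* Stop async use of the queue first... */
		rte_vhost_async_channel_unregister(vid, queue_id);

		/* ...then unconfigure only once no vhost port uses the device. */
		if (--dma_ref_count[dma_id] == 0)
			rte_vhost_async_dma_unconfigure(dma_id, 0);
	}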

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 examples/vhost/main.c | 38 +++++++++++++++++++++++++-------------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index ac78704d79..bfeb808dcc 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -73,6 +73,7 @@ static int total_num_mbufs = NUM_MBUFS_DEFAULT;
 
 struct dma_for_vhost dma_bind[RTE_MAX_VHOST_DEVICE];
 int16_t dmas_id[RTE_DMADEV_DEFAULT_MAX];
+int16_t dma_ref_count[RTE_DMADEV_DEFAULT_MAX];
 static int dma_count;
 
 /* mask of enabled ports */
@@ -371,6 +372,7 @@ open_dma(const char *value)
 done:
 		(dma_info + socketid)->dmas[vring_id].dev_id = dev_id;
 		(dma_info + socketid)->async_flag |= async_flag;
+		dma_ref_count[dev_id]++;
 		i++;
 	}
 out:
@@ -1562,6 +1564,27 @@ vhost_clear_queue(struct vhost_dev *vdev, uint16_t queue_id)
 	}
 }
 
+static void
+vhost_clear_async(struct vhost_dev *vdev, int vid, uint16_t queue_id)
+{
+	int16_t dma_id;
+
+	if (dma_bind[vid].dmas[queue_id].async_enabled) {
+		vhost_clear_queue(vdev, queue_id);
+		rte_vhost_async_channel_unregister(vid, queue_id);
+		dma_bind[vid].dmas[queue_id].async_enabled = false;
+
+		dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
+		dma_ref_count[dma_id]--;
+
+		if (dma_ref_count[dma_id] == 0) {
+			if (rte_vhost_async_dma_unconfigure(dma_id, 0) < 0)
+				RTE_LOG(ERR, VHOST_CONFIG,
+				       "Failed to unconfigure DMA %d in vhost.\n", dma_id);
+		}
+	}
+}
+
 /*
  * Remove a device from the specific data core linked list and from the
  * main linked list. Synchronization  occurs through the use of the
@@ -1618,17 +1641,8 @@ destroy_device(int vid)
 		"(%d) device has been removed from data core\n",
 		vdev->vid);
 
-	if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
-		vhost_clear_queue(vdev, VIRTIO_RXQ);
-		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
-		dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
-	}
-
-	if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) {
-		vhost_clear_queue(vdev, VIRTIO_TXQ);
-		rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ);
-		dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false;
-	}
+	vhost_clear_async(vdev, vid, VIRTIO_RXQ);
+	vhost_clear_async(vdev, vid, VIRTIO_TXQ);
 
 	rte_free(vdev);
 }
@@ -1690,8 +1704,6 @@ vhost_async_channel_register(int vid)
 	return rx_ret | tx_ret;
 }
 
-
-
 /*
  * A new device is added to a data core. First the device is added to the main linked list
  * and then allocated to a specific data core.
-- 
2.17.1