From: Vijay Srivastava <vijay.srivastava@xilinx.com>
To: <dev@dpdk.org>
CC: <maxime.coquelin@redhat.com>, <chenbo.xia@intel.com>,
 <andrew.rybchenko@oktetlabs.ru>, Vijay Kumar Srivastava <vsrivast@xilinx.com>
Date: Fri, 29 Oct 2021 20:16:41 +0530
Message-ID: <20211029144645.30295-7-vsrivast@xilinx.com>
X-Mailer: git-send-email 2.25.0
In-Reply-To: <20211029144645.30295-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com>
 <20211029144645.30295-1-vsrivast@xilinx.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v3 06/10] vdpa/sfc: add support for dev conf and
 dev close ops

From: Vijay Kumar Srivastava <vsrivast@xilinx.com>

Implement the vDPA ops dev_conf and dev_close, covering DMA mapping,
interrupt setup and virtqueue configuration.

Signed-off-by: Vijay Kumar Srivastava <vsrivast@xilinx.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
v2:
* Removed redundant null check while calling free().
* Added error handling for rte_vhost_get_vhost_vring().
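
For orientation: dev_conf and dev_close are callbacks invoked by the vhost
library when a vhost-user frontend attaches to or detaches from the device.
A minimal sketch of how these ops sit in the driver's ops table (the actual
table and its registration were added earlier in this series, so take the
shape below as illustrative rather than as part of this patch):

  static struct rte_vdpa_dev_ops sfc_vdpa_ops = {
          .get_queue_num = sfc_vdpa_get_queue_num,
          .get_features  = sfc_vdpa_get_features,
          .dev_conf      = sfc_vdpa_dev_config, /* vhost-user connect */
          .dev_close     = sfc_vdpa_dev_close,  /* vhost-user disconnect */
  };

  /* Registered at probe time against the PCI device */
  vdpa_dev = rte_vdpa_register_device(&pci_dev->device, &sfc_vdpa_ops);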

 drivers/vdpa/sfc/sfc_vdpa.c     |   6 +
 drivers/vdpa/sfc/sfc_vdpa.h     |  43 ++++
 drivers/vdpa/sfc/sfc_vdpa_hw.c  |  69 ++++++
 drivers/vdpa/sfc/sfc_vdpa_ops.c | 530 ++++++++++++++++++++++++++++++++++++++--
 drivers/vdpa/sfc/sfc_vdpa_ops.h |  28 +++
 5 files changed, 656 insertions(+), 20 deletions(-)

diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
index ccbd243..b3c82e5 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.c
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -246,6 +246,8 @@ struct sfc_vdpa_ops_data *
 
 	sfc_vdpa_log_init(sva, "entry");
 
+	sfc_vdpa_adapter_lock_init(sva);
+
 	sfc_vdpa_log_init(sva, "vfio init");
 	if (sfc_vdpa_vfio_setup(sva) < 0) {
 		sfc_vdpa_err(sva, "failed to setup device %s", pci_dev->name);
@@ -280,6 +282,8 @@ struct sfc_vdpa_ops_data *
 	sfc_vdpa_vfio_teardown(sva);
 
 fail_vfio_setup:
+	sfc_vdpa_adapter_lock_fini(sva);
+
 fail_set_log_prefix:
 	rte_free(sva);
 
@@ -311,6 +315,8 @@ struct sfc_vdpa_ops_data *
 
 	sfc_vdpa_vfio_teardown(sva);
 
+	sfc_vdpa_adapter_lock_fini(sva);
+
 	rte_free(sva);
 
 	return 0;
diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index c10c3d3..1bf96e7 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -80,10 +80,53 @@ struct sfc_vdpa_ops_data *
 void
 sfc_vdpa_dma_free(struct sfc_vdpa_adapter *sva, efsys_mem_t *esmp);
 
+int
+sfc_vdpa_dma_map(struct sfc_vdpa_ops_data *vdpa_data, bool do_map);
+
 static inline struct sfc_vdpa_adapter *
 sfc_vdpa_adapter_by_dev_handle(void *dev_handle)
 {
 	return (struct sfc_vdpa_adapter *)dev_handle;
 }
 
+/*
+ * Wrapper functions to acquire/release the lock, so that the lock can be
+ * removed or changed in a single place.
+ */
+static inline void
+sfc_vdpa_adapter_lock_init(struct sfc_vdpa_adapter *sva)
+{
+	rte_spinlock_init(&sva->lock);
+}
+
+static inline int
+sfc_vdpa_adapter_is_locked(struct sfc_vdpa_adapter *sva)
+{
+	return rte_spinlock_is_locked(&sva->lock);
+}
+
+static inline void
+sfc_vdpa_adapter_lock(struct sfc_vdpa_adapter *sva)
+{
+	rte_spinlock_lock(&sva->lock);
+}
+
+static inline int
+sfc_vdpa_adapter_trylock(struct sfc_vdpa_adapter *sva)
+{
+	return rte_spinlock_trylock(&sva->lock);
+}
+
+static inline void
+sfc_vdpa_adapter_unlock(struct sfc_vdpa_adapter *sva)
+{
+	rte_spinlock_unlock(&sva->lock);
+}
+
+static inline void
+sfc_vdpa_adapter_lock_fini(__rte_unused struct sfc_vdpa_adapter *sva)
+{
+	/* Just for symmetry of the API */
+}
+
 #endif  /* _SFC_VDPA_H */
diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
index 7a67bd8..b473708 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_hw.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
@@ -8,6 +8,7 @@
 #include <rte_common.h>
 #include <rte_errno.h>
 #include <rte_vfio.h>
+#include <rte_vhost.h>
 
 #include "efx.h"
 #include "sfc_vdpa.h"
@@ -109,6 +110,74 @@
 	memset(esmp, 0, sizeof(*esmp));
 }
 
+int
+sfc_vdpa_dma_map(struct sfc_vdpa_ops_data *ops_data, bool do_map)
+{
+	uint32_t i, j;
+	int rc;
+	struct rte_vhost_memory *vhost_mem = NULL;
+	struct rte_vhost_mem_region *mem_reg = NULL;
+	int vfio_container_fd;
+	void *dev;
+
+	dev = ops_data->dev_handle;
+	vfio_container_fd =
+		sfc_vdpa_adapter_by_dev_handle(dev)->vfio_container_fd;
+
+	rc = rte_vhost_get_mem_table(ops_data->vid, &vhost_mem);
+	if (rc < 0) {
+		sfc_vdpa_err(dev,
+			     "failed to get VM memory layout");
+		goto error;
+	}
+
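+	/* Map or unmap each region of the guest memory layout */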
+	for (i = 0; i < vhost_mem->nregions; i++) {
+		mem_reg = &vhost_mem->regions[i];
+
+		if (do_map) {
+			rc = rte_vfio_container_dma_map(vfio_container_fd,
+						mem_reg->host_user_addr,
+						mem_reg->guest_phys_addr,
+						mem_reg->size);
+			if (rc < 0) {
+				sfc_vdpa_err(dev,
+					     "DMA map failed : %s",
+					     rte_strerror(rte_errno));
+				goto failed_vfio_dma_map;
+			}
+		} else {
+			rc = rte_vfio_container_dma_unmap(vfio_container_fd,
+						mem_reg->host_user_addr,
+						mem_reg->guest_phys_addr,
+						mem_reg->size);
+			if (rc < 0) {
+				sfc_vdpa_err(dev,
+					     "DMA unmap failed : %s",
+					     rte_strerror(rte_errno));
+				goto error;
+			}
+		}
+	}
+
+	free(vhost_mem);
+
+	return 0;
+
+failed_vfio_dma_map:
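+	/* Roll back the regions mapped before the failure */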
+	for (j = 0; j < i; j++) {
+		mem_reg = &vhost_mem->regions[j];
+		rc = rte_vfio_container_dma_unmap(vfio_container_fd,
+						  mem_reg->host_user_addr,
+						  mem_reg->guest_phys_addr,
+						  mem_reg->size);
+	}
+
+error:
+	free(vhost_mem);
+
+	return rc;
+}
+
 static int
 sfc_vdpa_mem_bar_init(struct sfc_vdpa_adapter *sva,
 		      const efx_bar_region_t *mem_ebrp)
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 5253adb..de1c81a 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -3,10 +3,13 @@
  * Copyright(c) 2020-2021 Xilinx, Inc.
  */
 
+#include <sys/ioctl.h>
+
 #include <rte_errno.h>
 #include <rte_malloc.h>
 #include <rte_vdpa.h>
 #include <rte_vdpa_dev.h>
+#include <rte_vfio.h>
 #include <rte_vhost.h>
 
 #include "efx.h"
@@ -28,24 +31,12 @@
 #define SFC_VDPA_DEFAULT_FEATURES \
 		(1ULL << VHOST_USER_F_PROTOCOL_FEATURES)
 
-static int
-sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
-{
-	struct sfc_vdpa_ops_data *ops_data;
-	void *dev;
-
-	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
-	if (ops_data == NULL)
-		return -1;
-
-	dev = ops_data->dev_handle;
-	*queue_num = sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count;
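+/* Buffer holds one eventfd per vring plus one for the device config interrupt */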
+#define SFC_VDPA_MSIX_IRQ_SET_BUF_LEN \
+		(sizeof(struct vfio_irq_set) + \
+		sizeof(int) * (SFC_VDPA_MAX_QUEUE_PAIRS * 2 + 1))
 
-	sfc_vdpa_info(dev, "vDPA ops get_queue_num :: supported queue num : %d",
-		      *queue_num);
-
-	return 0;
-}
+/* Used as the target VF when the calling function is not a PF */
+#define SFC_VDPA_VF_NULL		0xFFFF
 
 static int
 sfc_vdpa_get_device_features(struct sfc_vdpa_ops_data *ops_data)
@@ -74,6 +65,441 @@
 	return 0;
 }
 
+static uint64_t
+hva_to_gpa(int vid, uint64_t hva)
+{
+	struct rte_vhost_memory *vhost_mem = NULL;
+	struct rte_vhost_mem_region *mem_reg = NULL;
+	uint32_t i;
+	uint64_t gpa = 0;
+
+	if (rte_vhost_get_mem_table(vid, &vhost_mem) < 0)
+		goto error;
+
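+	/* Find the region containing the HVA and apply its GPA offset */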
+	for (i = 0; i < vhost_mem->nregions; i++) {
+		mem_reg = &vhost_mem->regions[i];
+
+		if (hva >= mem_reg->host_user_addr &&
+				hva < mem_reg->host_user_addr + mem_reg->size) {
+			gpa = (hva - mem_reg->host_user_addr) +
+				mem_reg->guest_phys_addr;
+			break;
+		}
+	}
+
+error:
+	free(vhost_mem);
+	return gpa;
+}
+
+static int
+sfc_vdpa_enable_vfio_intr(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc;
+	int *irq_fd_ptr;
+	int vfio_dev_fd;
+	uint32_t i, num_vring;
+	struct rte_vhost_vring vring;
+	struct vfio_irq_set *irq_set;
+	struct rte_pci_device *pci_dev;
+	char irq_set_buf[SFC_VDPA_MSIX_IRQ_SET_BUF_LEN];
+	void *dev;
+
+	num_vring = rte_vhost_get_vring_num(ops_data->vid);
+	dev = ops_data->dev_handle;
+	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
+	pci_dev = sfc_vdpa_adapter_by_dev_handle(dev)->pdev;
+
+	irq_set = (struct vfio_irq_set *)irq_set_buf;
+	irq_set->argsz = sizeof(irq_set_buf);
+	irq_set->count = num_vring + 1;
+	irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
+			 VFIO_IRQ_SET_ACTION_TRIGGER;
+	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
+	irq_set->start = 0;
+	irq_fd_ptr = (int *)&irq_set->data;
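+	/* Vector 0 carries the device config interrupt; per-vring call fds follow */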
+	irq_fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
+		rte_intr_fd_get(pci_dev->intr_handle);
+
+	for (i = 0; i < num_vring; i++) {
+		rc = rte_vhost_get_vhost_vring(ops_data->vid, i, &vring);
+		if (rc)
+			return -1;
+
+		irq_fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = vring.callfd;
+	}
+
+	rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	if (rc) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "error enabling MSI-X interrupts: %s",
+			     strerror(errno));
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+sfc_vdpa_disable_vfio_intr(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc;
+	int vfio_dev_fd;
+	struct vfio_irq_set *irq_set;
+	char irq_set_buf[SFC_VDPA_MSIX_IRQ_SET_BUF_LEN];
+	void *dev;
+
+	dev = ops_data->dev_handle;
+	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
+
+	irq_set = (struct vfio_irq_set *)irq_set_buf;
+	irq_set->argsz = sizeof(irq_set_buf);
+	irq_set->count = 0;
+	irq_set->flags = VFIO_IRQ_SET_DATA_NONE | VFIO_IRQ_SET_ACTION_TRIGGER;
+	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
+	irq_set->start = 0;
+
+	rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	if (rc) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "error disabling MSI-X interrupts: %s",
+			     strerror(errno));
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+sfc_vdpa_get_vring_info(struct sfc_vdpa_ops_data *ops_data,
+			int vq_num, struct sfc_vdpa_vring_info *vring)
+{
+	int rc;
+	uint64_t gpa;
+	struct rte_vhost_vring vq;
+
+	rc = rte_vhost_get_vhost_vring(ops_data->vid, vq_num, &vq);
+	if (rc < 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "get vhost vring failed: %s", rte_strerror(rc));
+		return rc;
+	}
+
+	gpa = hva_to_gpa(ops_data->vid, (uint64_t)(uintptr_t)vq.desc);
+	if (gpa == 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "failed to get GPA for descriptor ring");
+		goto fail_vring_map;
+	}
+	vring->desc = gpa;
+
+	gpa = hva_to_gpa(ops_data->vid, (uint64_t)(uintptr_t)vq.avail);
+	if (gpa == 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "failed to get GPA for available ring");
+		goto fail_vring_map;
+	}
+	vring->avail = gpa;
+
+	gpa = hva_to_gpa(ops_data->vid, (uint64_t)(uintptr_t)vq.used);
+	if (gpa == 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "failed to get GPA for used ring");
+		goto fail_vring_map;
+	}
+	vring->used = gpa;
+
+	vring->size = vq.size;
+
+	rc = rte_vhost_get_vring_base(ops_data->vid, vq_num,
+				      &vring->last_avail_idx,
+				      &vring->last_used_idx);
+
+	return rc;
+
+fail_vring_map:
+	return -1;
+}
+
+static int
+sfc_vdpa_virtq_start(struct sfc_vdpa_ops_data *ops_data, int vq_num)
+{
+	int rc;
+	efx_virtio_vq_t *vq;
+	struct sfc_vdpa_vring_info vring;
+	efx_virtio_vq_cfg_t vq_cfg;
+	efx_virtio_vq_dyncfg_t vq_dyncfg;
+
+	vq = ops_data->vq_cxt[vq_num].vq;
+	if (vq == NULL)
+		return -1;
+
+	rc = sfc_vdpa_get_vring_info(ops_data, vq_num, &vring);
+	if (rc < 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "get vring info failed: %s", rte_strerror(rc));
+		goto fail_vring_info;
+	}
+
+	vq_cfg.evvc_target_vf = SFC_VDPA_VF_NULL;
+
+	/* Even virtqueues are RX queues and odd virtqueues are TX queues */
+	if (vq_num % 2) {
+		vq_cfg.evvc_type = EFX_VIRTIO_VQ_TYPE_NET_TXQ;
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "configure virtqueue # %d (TXQ)", vq_num);
+	} else {
+		vq_cfg.evvc_type = EFX_VIRTIO_VQ_TYPE_NET_RXQ;
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "configure virtqueue # %d (RXQ)", vq_num);
+	}
+
+	vq_cfg.evvc_vq_num = vq_num;
+	vq_cfg.evvc_desc_tbl_addr   = vring.desc;
+	vq_cfg.evvc_avail_ring_addr = vring.avail;
+	vq_cfg.evvc_used_ring_addr  = vring.used;
+	vq_cfg.evvc_vq_size = vring.size;
+
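+	/* Resume the queue from the last used/avail indices known to vhost */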
+	vq_dyncfg.evvd_vq_pidx = vring.last_used_idx;
+	vq_dyncfg.evvd_vq_cidx = vring.last_avail_idx;
+
+	/* MSI-X vector is function-relative */
+	vq_cfg.evvc_msix_vector = RTE_INTR_VEC_RXTX_OFFSET + vq_num;
+	if (ops_data->vdpa_context == SFC_VDPA_AS_VF)
+		vq_cfg.evvc_pas_id = 0;
+	vq_cfg.evcc_features = ops_data->dev_features &
+			       ops_data->req_features;
+
+	/* Start virtqueue */
+	rc = efx_virtio_qstart(vq, &vq_cfg, &vq_dyncfg);
+	if (rc != 0) {
+		/* destroy virtqueue */
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "virtqueue start failed: %s",
+			     rte_strerror(rc));
+		efx_virtio_qdestroy(vq);
+		goto fail_virtio_qstart;
+	}
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "virtqueue started successfully for vq_num %d", vq_num);
+
+	ops_data->vq_cxt[vq_num].enable = B_TRUE;
+
+	return rc;
+
+fail_virtio_qstart:
+fail_vring_info:
+	return rc;
+}
+
+static int
+sfc_vdpa_virtq_stop(struct sfc_vdpa_ops_data *ops_data, int vq_num)
+{
+	int rc;
+	efx_virtio_vq_dyncfg_t vq_idx;
+	efx_virtio_vq_t *vq;
+
+	if (ops_data->vq_cxt[vq_num].enable != B_TRUE)
+		return -1;
+
+	vq = ops_data->vq_cxt[vq_num].vq;
+	if (vq == NULL)
+		return -1;
+
+	/* stop the vq */
+	rc = efx_virtio_qstop(vq, &vq_idx);
+	if (rc == 0) {
+		ops_data->vq_cxt[vq_num].cidx = vq_idx.evvd_vq_cidx;
+		ops_data->vq_cxt[vq_num].pidx = vq_idx.evvd_vq_pidx;
+	}
+	ops_data->vq_cxt[vq_num].enable = B_FALSE;
+
+	return rc;
+}
+
+static int
+sfc_vdpa_configure(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc, i;
+	int nr_vring;
+	int max_vring_cnt;
+	efx_virtio_vq_t *vq;
+	efx_nic_t *nic;
+	void *dev;
+
+	dev = ops_data->dev_handle;
+	nic = sfc_vdpa_adapter_by_dev_handle(dev)->nic;
+
+	SFC_EFX_ASSERT(ops_data->state == SFC_VDPA_STATE_INITIALIZED);
+
+	ops_data->state = SFC_VDPA_STATE_CONFIGURING;
+
+	nr_vring = rte_vhost_get_vring_num(ops_data->vid);
+	max_vring_cnt =
+		(sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count * 2);
+
+	/* The number of vrings must not exceed the supported max VQ count */
+	if (nr_vring > max_vring_cnt) {
+		sfc_vdpa_err(dev,
+			     "nr_vring (%d) is > max vring count (%d)",
+			     nr_vring, max_vring_cnt);
+		goto fail_vring_num;
+	}
+
+	rc = sfc_vdpa_dma_map(ops_data, true);
+	if (rc) {
+		sfc_vdpa_err(dev,
+			     "DMA map failed: %s", rte_strerror(rc));
+		goto fail_dma_map;
+	}
+
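+	/* Create a VQ context per vring; VQs are started in sfc_vdpa_start() */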
+	for (i = 0; i < nr_vring; i++) {
+		rc = efx_virtio_qcreate(nic, &vq);
+		if ((rc != 0) || (vq == NULL)) {
+			sfc_vdpa_err(dev,
+				     "virtqueue create failed: %s",
+				     rte_strerror(rc));
+			goto fail_vq_create;
+		}
+
+		/* store created virtqueue context */
+		ops_data->vq_cxt[i].vq = vq;
+	}
+
+	ops_data->vq_count = i;
+
+	ops_data->state = SFC_VDPA_STATE_CONFIGURED;
+
+	return 0;
+
+fail_vq_create:
+	sfc_vdpa_dma_map(ops_data, false);
+
+fail_dma_map:
+fail_vring_num:
+	ops_data->state = SFC_VDPA_STATE_INITIALIZED;
+
+	return -1;
+}
+
+static void
+sfc_vdpa_close(struct sfc_vdpa_ops_data *ops_data)
+{
+	int i;
+
+	if (ops_data->state != SFC_VDPA_STATE_CONFIGURED)
+		return;
+
+	ops_data->state = SFC_VDPA_STATE_CLOSING;
+
+	for (i = 0; i < ops_data->vq_count; i++) {
+		if (ops_data->vq_cxt[i].vq == NULL)
+			continue;
+
+		efx_virtio_qdestroy(ops_data->vq_cxt[i].vq);
+	}
+
+	sfc_vdpa_dma_map(ops_data, false);
+
+	ops_data->state = SFC_VDPA_STATE_INITIALIZED;
+}
+
+static void
+sfc_vdpa_stop(struct sfc_vdpa_ops_data *ops_data)
+{
+	int i;
+	int rc;
+
+	if (ops_data->state != SFC_VDPA_STATE_STARTED)
+		return;
+
+	ops_data->state = SFC_VDPA_STATE_STOPPING;
+
+	for (i = 0; i < ops_data->vq_count; i++) {
+		rc = sfc_vdpa_virtq_stop(ops_data, i);
+		if (rc != 0)
+			continue;
+	}
+
+	sfc_vdpa_disable_vfio_intr(ops_data);
+
+	ops_data->state = SFC_VDPA_STATE_CONFIGURED;
+}
+
+static int
+sfc_vdpa_start(struct sfc_vdpa_ops_data *ops_data)
+{
+	int i, j;
+	int rc;
+
+	SFC_EFX_ASSERT(ops_data->state == SFC_VDPA_STATE_CONFIGURED);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "entry");
+
+	ops_data->state = SFC_VDPA_STATE_STARTING;
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "enable interrupts");
+	rc = sfc_vdpa_enable_vfio_intr(ops_data);
+	if (rc < 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "vfio intr allocation failed: %s",
+			     rte_strerror(rc));
+		goto fail_enable_vfio_intr;
+	}
+
+	rte_vhost_get_negotiated_features(ops_data->vid,
+					  &ops_data->req_features);
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "negotiated features: 0x%" PRIx64,
+		      ops_data->req_features);
+
+	for (i = 0; i < ops_data->vq_count; i++) {
+		sfc_vdpa_log_init(ops_data->dev_handle,
+				  "starting vq# %d", i);
+		rc = sfc_vdpa_virtq_start(ops_data, i);
+		if (rc != 0)
+			goto fail_vq_start;
+	}
+
+	ops_data->state = SFC_VDPA_STATE_STARTED;
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "done");
+
+	return 0;
+
+fail_vq_start:
+	/* stop already started virtqueues */
+	for (j = 0; j < i; j++)
+		sfc_vdpa_virtq_stop(ops_data, j);
+	sfc_vdpa_disable_vfio_intr(ops_data);
+
+fail_enable_vfio_intr:
+	ops_data->state = SFC_VDPA_STATE_CONFIGURED;
+
+	return rc;
+}
+
+static int
+sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
+{
+	struct sfc_vdpa_ops_data *ops_data;
+	void *dev;
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+	*queue_num = sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count;
+
+	sfc_vdpa_info(dev, "vDPA ops get_queue_num :: supported queue num : %d",
+		      *queue_num);
+
+	return 0;
+}
+
 static int
 sfc_vdpa_get_features(struct rte_vdpa_device *vdpa_dev, uint64_t *features)
 {
@@ -114,7 +540,53 @@
 static int
 sfc_vdpa_dev_config(int vid)
 {
-	RTE_SET_USED(vid);
+	struct rte_vdpa_device *vdpa_dev;
+	int rc;
+	struct sfc_vdpa_ops_data *ops_data;
+
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL) {
+		/* ops_data is NULL, so it cannot be dereferenced for logging */
+		return -1;
+	}
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "entry");
+
+	ops_data->vid = vid;
+
+	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "configuring");
+	rc = sfc_vdpa_configure(ops_data);
+	if (rc != 0)
+		goto fail_vdpa_config;
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "starting");
+	rc = sfc_vdpa_start(ops_data);
+	if (rc != 0)
+		goto fail_vdpa_start;
+
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "vhost notifier ctrl");
+	if (rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, true) != 0)
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "vDPA (%s): software relay for notify is used.",
+			      vdpa_dev->device->name);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "done");
+
+	return 0;
+
+fail_vdpa_start:
+	sfc_vdpa_close(ops_data);
+
+fail_vdpa_config:
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
 
 	return -1;
 }
@@ -122,9 +594,27 @@
 static int
 sfc_vdpa_dev_close(int vid)
 {
-	RTE_SET_USED(vid);
+	struct rte_vdpa_device *vdpa_dev;
+	struct sfc_vdpa_ops_data *ops_data;
 
-	return -1;
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL) {
+		/* ops_data is NULL, so it cannot be dereferenced for logging */
+		return -1;
+	}
+
+	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+
+	sfc_vdpa_stop(ops_data);
+	sfc_vdpa_close(ops_data);
+
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+
+	return 0;
 }
 
 static int
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h
index 21cbb73..8d553c5 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.h
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h
@@ -18,17 +18,45 @@ enum sfc_vdpa_context {
 enum sfc_vdpa_state {
 	SFC_VDPA_STATE_UNINITIALIZED = 0,
 	SFC_VDPA_STATE_INITIALIZED,
+	SFC_VDPA_STATE_CONFIGURING,
+	SFC_VDPA_STATE_CONFIGURED,
+	SFC_VDPA_STATE_CLOSING,
+	SFC_VDPA_STATE_CLOSED,
+	SFC_VDPA_STATE_STARTING,
+	SFC_VDPA_STATE_STARTED,
+	SFC_VDPA_STATE_STOPPING,
 	SFC_VDPA_STATE_NSTATES
 };
 
+struct sfc_vdpa_vring_info {
+	uint64_t	desc;
+	uint64_t	avail;
+	uint64_t	used;
+	uint64_t	size;
+	uint16_t	last_avail_idx;
+	uint16_t	last_used_idx;
+};
+
+typedef struct sfc_vdpa_vq_context_s {
+	uint8_t				enable;
+	uint32_t			pidx;
+	uint32_t			cidx;
+	efx_virtio_vq_t			*vq;
+} sfc_vdpa_vq_context_t;
+
 struct sfc_vdpa_ops_data {
 	void				*dev_handle;
+	int				vid;
 	struct rte_vdpa_device		*vdpa_dev;
 	enum sfc_vdpa_context		vdpa_context;
 	enum sfc_vdpa_state		state;
 
 	uint64_t			dev_features;
 	uint64_t			drv_features;
+	uint64_t			req_features;
+
+	uint16_t			vq_count;
+	struct sfc_vdpa_vq_context_s	vq_cxt[SFC_VDPA_MAX_QUEUE_PAIRS * 2];
 };
 
 struct sfc_vdpa_ops_data *
-- 
1.8.3.1