From: Matan Azrad
To: Maxime Coquelin, "dev@dpdk.org", Slava Ovsiienko
Date: Fri, 31 Jan 2020 07:34:48 +0000
References: <1579539790-3882-1-git-send-email-matan@mellanox.com> <1580292549-27439-1-git-send-email-matan@mellanox.com> <1580292549-27439-7-git-send-email-matan@mellanox.com> <2bdf2495-6d48-9d03-aa6a-a8a40507a020@redhat.com>
In-Reply-To: <2bdf2495-6d48-9d03-aa6a-a8a40507a020@redhat.com>
Subject: Re: [dpdk-dev] [PATCH v2 06/13] vdpa/mlx5: prepare virtio queues
List-Id: DPDK patches and discussions

From: Maxime Coquelin

> On 1/29/20 11:09 AM, Matan Azrad wrote:
> > The HW virtq object represents an emulated context for a VIRTIO_NET
> > virtqueue which was created and managed by a VIRTIO_NET driver as
> > defined in the VIRTIO specification.
> >
> > Add support to prepare and release all the basic HW resources needed
> > for the user virtqs emulation according to the rte_vhost configurations.
> >
> > This patch prepares the basic configurations needed by DevX commands
> > to create a virtq.
> >
> > Add a new file, mlx5_vdpa_virtq.c, to manage virtq operations.
> >
> > Signed-off-by: Matan Azrad
> > Acked-by: Viacheslav Ovsiienko
> > ---
> >  drivers/vdpa/mlx5/Makefile          |   1 +
> >  drivers/vdpa/mlx5/meson.build       |   1 +
> >  drivers/vdpa/mlx5/mlx5_vdpa.c       |   1 +
> >  drivers/vdpa/mlx5/mlx5_vdpa.h       |  36 ++++++
> >  drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 212 ++++++++++++++++++++++++++++++++++++
> >  5 files changed, 251 insertions(+)
> >  create mode 100644 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
> >
> > diff --git a/drivers/vdpa/mlx5/Makefile b/drivers/vdpa/mlx5/Makefile
> > index 7f13756..353e262 100644
> > --- a/drivers/vdpa/mlx5/Makefile
> > +++ b/drivers/vdpa/mlx5/Makefile
> > @@ -10,6 +10,7 @@ LIB = librte_pmd_mlx5_vdpa.a
> >  SRCS-$(CONFIG_RTE_LIBRTE_MLX5_VDPA_PMD) += mlx5_vdpa.c
> >  SRCS-$(CONFIG_RTE_LIBRTE_MLX5_VDPA_PMD) += mlx5_vdpa_mem.c
> >  SRCS-$(CONFIG_RTE_LIBRTE_MLX5_VDPA_PMD) += mlx5_vdpa_event.c
> > +SRCS-$(CONFIG_RTE_LIBRTE_MLX5_VDPA_PMD) += mlx5_vdpa_virtq.c
> >
> >  # Basic CFLAGS.
> >  CFLAGS += -O3
> > diff --git a/drivers/vdpa/mlx5/meson.build b/drivers/vdpa/mlx5/meson.build
> > index c609f7c..e017f95 100644
> > --- a/drivers/vdpa/mlx5/meson.build
> > +++ b/drivers/vdpa/mlx5/meson.build
> > @@ -14,6 +14,7 @@ sources = files(
> >  	'mlx5_vdpa.c',
> >  	'mlx5_vdpa_mem.c',
> >  	'mlx5_vdpa_event.c',
> > +	'mlx5_vdpa_virtq.c',
> >  )
> >  cflags_options = [
> >  	'-std=c11',
> > diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
> > index c67f93d..4d30b35 100644
> > --- a/drivers/vdpa/mlx5/mlx5_vdpa.c
> > +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
> > @@ -229,6 +229,7 @@
> >  		goto error;
> >  	}
> >  	SLIST_INIT(&priv->mr_list);
> > +	SLIST_INIT(&priv->virtq_list);
> >  	pthread_mutex_lock(&priv_list_lock);
> >  	TAILQ_INSERT_TAIL(&priv_list, priv, next);
> >  	pthread_mutex_unlock(&priv_list_lock);
> > diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
> > index 30030b7..a7e2185 100644
> > --- a/drivers/vdpa/mlx5/mlx5_vdpa.h
> > +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
> > @@ -53,6 +53,19 @@ struct mlx5_vdpa_query_mr {
> >  	int is_indirect;
> >  };
> >
> > +struct mlx5_vdpa_virtq {
> > +	SLIST_ENTRY(mlx5_vdpa_virtq) next;
> > +	uint16_t index;
> > +	uint16_t vq_size;
> > +	struct mlx5_devx_obj *virtq;
> > +	struct mlx5_vdpa_event_qp eqp;
> > +	struct {
> > +		struct mlx5dv_devx_umem *obj;
> > +		void *buf;
> > +		uint32_t size;
> > +	} umems[3];
> > +};
> > +
> >  struct mlx5_vdpa_priv {
> >  	TAILQ_ENTRY(mlx5_vdpa_priv) next;
> >  	int id; /* vDPA device id.
*/
> > @@ -69,6 +82,10 @@ struct mlx5_vdpa_priv {
> >  	struct mlx5dv_devx_event_channel *eventc;
> >  	struct mlx5dv_devx_uar *uar;
> >  	struct rte_intr_handle intr_handle;
> > +	struct mlx5_devx_obj *td;
> > +	struct mlx5_devx_obj *tis;
> > +	uint16_t nr_virtqs;
> > +	SLIST_HEAD(virtq_list, mlx5_vdpa_virtq) virtq_list;
> >  	SLIST_HEAD(mr_list, mlx5_vdpa_query_mr) mr_list;
> >  };
> >
> > @@ -146,4 +163,23 @@ int mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
> >   */
> >  void mlx5_vdpa_cqe_event_unset(struct mlx5_vdpa_priv *priv);
> >
> > +/**
> > + * Release a virtq and all its related resources.
> > + *
> > + * @param[in] priv
> > + *   The vdpa driver private structure.
> > + */
> > +void mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv);
> > +
> > +/**
> > + * Create all the HW virtqs resources and all their related resources.
> > + *
> > + * @param[in] priv
> > + *   The vdpa driver private structure.
> > + *
> > + * @return
> > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +int mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv);
> > +
> >  #endif /* RTE_PMD_MLX5_VDPA_H_ */
> > diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
> > new file mode 100644
> > index 0000000..781bccf
> > --- /dev/null
> > +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
> > @@ -0,0 +1,212 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2019 Mellanox Technologies, Ltd
> > + */
> > +#include
> > +
> > +#include
> > +#include
> > +
> > +#include
> > +
> > +#include "mlx5_vdpa_utils.h"
> > +#include "mlx5_vdpa.h"
> > +
> > +
> > +static int
> > +mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
> > +{
> > +	int i;
> > +
> > +	if (virtq->virtq) {
> > +		claim_zero(mlx5_devx_cmd_destroy(virtq->virtq));
> > +		virtq->virtq = NULL;
> > +	}
> > +	for (i = 0; i < 3; ++i) {
> > +		if (virtq->umems[i].obj)
> > +			claim_zero(mlx5_glue->devx_umem_dereg
> > +							(virtq->umems[i].obj));
> > +		if (virtq->umems[i].buf)
> > +			rte_free(virtq->umems[i].buf);
> > +	}
> > +	memset(&virtq->umems, 0, sizeof(virtq->umems));
> > +	if (virtq->eqp.fw_qp)
> > +		mlx5_vdpa_event_qp_destroy(&virtq->eqp);
> > +	return 0;
> > +}
> > +
> > +void
> > +mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv)
> > +{
> > +	struct mlx5_vdpa_virtq *entry;
> > +	struct mlx5_vdpa_virtq *next;
> > +
> > +	entry = SLIST_FIRST(&priv->virtq_list);
> > +	while (entry) {
> > +		next = SLIST_NEXT(entry, next);
> > +		mlx5_vdpa_virtq_unset(entry);
> > +		SLIST_REMOVE(&priv->virtq_list, entry, mlx5_vdpa_virtq, next);
> > +		rte_free(entry);
> > +		entry = next;
> > +	}
> > +	SLIST_INIT(&priv->virtq_list);
> > +	if (priv->tis) {
> > +		claim_zero(mlx5_devx_cmd_destroy(priv->tis));
> > +		priv->tis = NULL;
> > +	}
> > +	if (priv->td) {
> > +		claim_zero(mlx5_devx_cmd_destroy(priv->td));
> > +		priv->td = NULL;
> > +	}
> > +}
> > +
> > +static uint64_t
> > +mlx5_vdpa_hva_to_gpa(struct rte_vhost_memory *mem, uint64_t hva)
> > +{
> > +	struct rte_vhost_mem_region *reg;
> > +	uint32_t i;
> > +	uint64_t gpa = 0;
> > +
> > +	for (i = 0; i < mem->nregions; i++) {
> > +		reg = &mem->regions[i];
> > +		if (hva >= reg->host_user_addr &&
> > +		    hva < reg->host_user_addr + reg->size) {
> > +			gpa = hva - reg->host_user_addr + reg->guest_phys_addr;
> > +			break;
> > +		}
> > +	}
> > +	return gpa;
> > +}
>
> I think you may need a third parameter for the size to map.
> Otherwise, you would be vulnerable to CVE-2018-1059.

Yes, I just read it and understood that the virtio descriptor queues/packets data may be non-contiguous in guest physical memory, and maybe even undefined here in the rte_vhost library. Is that right?

Don't you think that rte_vhost should validate it? At least that all the queue memory is mapped?

Can you explain more about why it may happen? A QEMU bug?

In any case, from Mellanox's perspective, at least for the packet data, it is OK: if the guest tries to access a physical address which is not mapped, the packet will be ignored by the HW.

> > +
> > +static int
> > +mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv,
> > +		      struct mlx5_vdpa_virtq *virtq, int index)
> > +{
> > +	struct rte_vhost_vring vq;
> > +	struct mlx5_devx_virtq_attr attr = {0};
> > +	uint64_t gpa;
> > +	int ret;
> > +	int i;
> > +	uint16_t last_avail_idx;
> > +	uint16_t last_used_idx;
> > +
> > +	ret = rte_vhost_get_vhost_vring(priv->vid, index, &vq);
> > +	if (ret)
> > +		return -1;
> > +	virtq->index = index;
> > +	virtq->vq_size = vq.size;
> > +	/*
> > +	 * No need for event QP creation when the guest is in poll mode
> > +	 * or when the capability allows it.
> > +	 */
> > +	attr.event_mode = vq.callfd != -1 ||
> > +			  !(priv->caps.event_mode &
> > +			    (1 << MLX5_VIRTQ_EVENT_MODE_NO_MSIX)) ?
> > +			  MLX5_VIRTQ_EVENT_MODE_QP :
> > +			  MLX5_VIRTQ_EVENT_MODE_NO_MSIX;
> > +	if (attr.event_mode == MLX5_VIRTQ_EVENT_MODE_QP) {
> > +		ret = mlx5_vdpa_event_qp_create(priv, vq.size, vq.callfd,
> > +						&virtq->eqp);
> > +		if (ret) {
> > +			DRV_LOG(ERR, "Failed to create event QPs for virtq %d.",
> > +				index);
> > +			return -1;
> > +		}
> > +		attr.qp_id = virtq->eqp.fw_qp->id;
> > +	} else {
> > +		DRV_LOG(INFO, "Virtq %d is, for sure, working by poll mode, no"
> > +			" need event QPs and event mechanism.", index);
> > +	}
> > +	/* Setup 3 UMEMs for each virtq. */
> > +	for (i = 0; i < 3; ++i) {
> > +		virtq->umems[i].size = priv->caps.umems[i].a * vq.size +
> > +				       priv->caps.umems[i].b;
> > +		virtq->umems[i].buf = rte_zmalloc(__func__,
> > +						  virtq->umems[i].size, 4096);
> > +		if (!virtq->umems[i].buf) {
> > +			DRV_LOG(ERR, "Cannot allocate umem %d memory for virtq"
> > +				" %u.", i, index);
> > +			goto error;
> > +		}
> > +		virtq->umems[i].obj = mlx5_glue->devx_umem_reg(priv->ctx,
> > +						       virtq->umems[i].buf,
> > +						       virtq->umems[i].size,
> > +						       IBV_ACCESS_LOCAL_WRITE);
> > +		if (!virtq->umems[i].obj) {
> > +			DRV_LOG(ERR, "Failed to register umem %d for virtq %u.",
> > +				i, index);
> > +			goto error;
> > +		}
> > +		attr.umems[i].id = virtq->umems[i].obj->umem_id;
> > +		attr.umems[i].offset = 0;
> > +		attr.umems[i].size = virtq->umems[i].size;
> > +	}
> > +	gpa = mlx5_vdpa_hva_to_gpa(priv->vmem, (uint64_t)(uintptr_t)vq.desc);
> > +	if (!gpa) {
> > +		DRV_LOG(ERR, "Fail to get GPA for descriptor ring.");
> > +		goto error;
> > +	}
> > +	attr.desc_addr = gpa;
> > +	gpa = mlx5_vdpa_hva_to_gpa(priv->vmem, (uint64_t)(uintptr_t)vq.used);
> > +	if (!gpa) {
> > +		DRV_LOG(ERR, "Fail to get GPA for used ring.");
> > +		goto error;
> > +	}
> > +	attr.used_addr = gpa;
> > +	gpa = mlx5_vdpa_hva_to_gpa(priv->vmem, (uint64_t)(uintptr_t)vq.avail);
> > +	if (!gpa) {
> > +		DRV_LOG(ERR, "Fail to get GPA for available ring.");
> > +		goto error;
> > +	}
> > +	attr.available_addr = gpa;
> > +	rte_vhost_get_vring_base(priv->vid, index, &last_avail_idx,
> > +				 &last_used_idx);
> > +	DRV_LOG(INFO, "vid %d: Init last_avail_idx=%d, last_used_idx=%d for "
> > +		"virtq %d.", priv->vid, last_avail_idx, last_used_idx, index);
> > +	attr.hw_available_index = last_avail_idx;
> > +	attr.hw_used_index = last_used_idx;
> > +	attr.q_size = vq.size;
> > +	attr.mkey = priv->gpa_mkey_index;
> > +	attr.tis_id = priv->tis->id;
> > +	attr.queue_index = index;
> > +	virtq->virtq = mlx5_devx_cmd_create_virtq(priv->ctx, &attr);
> > +	if (!virtq->virtq)
> > +		goto error;
> > +	return 0;
> > +error:
> > +	mlx5_vdpa_virtq_unset(virtq);
> > +	return -1;
> > +}
> > +
> > +int
> > +mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
> > +{
> > +	struct mlx5_devx_tis_attr tis_attr = {0};
> > +	struct mlx5_vdpa_virtq *virtq;
> > +	uint32_t i;
> > +	uint16_t nr_vring = rte_vhost_get_vring_num(priv->vid);
> > +
> > +	priv->td = mlx5_devx_cmd_create_td(priv->ctx);
> > +	if (!priv->td) {
> > +		DRV_LOG(ERR, "Failed to create transport domain.");
> > +		return -rte_errno;
> > +	}
> > +	tis_attr.transport_domain = priv->td->id;
> > +	priv->tis = mlx5_devx_cmd_create_tis(priv->ctx, &tis_attr);
> > +	if (!priv->tis) {
> > +		DRV_LOG(ERR, "Failed to create TIS.");
> > +		goto error;
> > +	}
> > +	for (i = 0; i < nr_vring; i++) {
> > +		virtq = rte_zmalloc(__func__, sizeof(*virtq), 0);
> > +		if (!virtq || mlx5_vdpa_virtq_setup(priv, virtq, i)) {
> > +			if (virtq)
> > +				rte_free(virtq);
> > +			goto error;
> > +		}
> > +		SLIST_INSERT_HEAD(&priv->virtq_list, virtq, next);
> > +	}
> > +	priv->nr_virtqs = nr_vring;
> > +	return 0;
> > +error:
> > +	mlx5_vdpa_virtqs_release(priv);
> > +	return -1;
> > +}