From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Matan Azrad, dev@dpdk.org, Viacheslav Ovsiienko
References: <1579539790-3882-1-git-send-email-matan@mellanox.com>
 <1580292549-27439-1-git-send-email-matan@mellanox.com>
 <1580292549-27439-6-git-send-email-matan@mellanox.com>
Message-ID: <9e6103fc-e8b1-ba72-a6b8-08b5ba6754bd@redhat.com>
In-Reply-To: <1580292549-27439-6-git-send-email-matan@mellanox.com>
Date: Thu, 30 Jan 2020 19:17:52 +0100
Subject: Re: [dpdk-dev] [PATCH v2 05/13] vdpa/mlx5: prepare HW queues
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

On 1/29/20 11:09 AM, Matan Azrad wrote:
> In preparation for the virtio queue
creation, two QPs and a CQ may be
> created for the virtio queue.
>
> The design is to trigger an event for the guest and for the vdpa driver
> when a new CQE is posted by the HW after the packet transmission.
>
> This patch adds the basic operations to create and destroy the above HW
> objects and to trigger the CQE events when a new CQE is posted.
>
> Signed-off-by: Matan Azrad
> Acked-by: Viacheslav Ovsiienko
> ---
>  drivers/common/mlx5/mlx5_prm.h      |   4 +
>  drivers/vdpa/mlx5/Makefile          |   1 +
>  drivers/vdpa/mlx5/meson.build       |   1 +
>  drivers/vdpa/mlx5/mlx5_vdpa.h       |  89 ++++++++
>  drivers/vdpa/mlx5/mlx5_vdpa_event.c | 399 ++++++++++++++++++++++++++++++++++++
>  5 files changed, 494 insertions(+)
>  create mode 100644 drivers/vdpa/mlx5/mlx5_vdpa_event.c
>
> diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
> index b48cd0a..b533798 100644
> --- a/drivers/common/mlx5/mlx5_prm.h
> +++ b/drivers/common/mlx5/mlx5_prm.h
> @@ -392,6 +392,10 @@ struct mlx5_cqe {
>  /* CQE format value. */
>  #define MLX5_COMPRESSED 0x3
>  
> +/* CQ doorbell cmd types. */
> +#define MLX5_CQ_DBR_CMD_SOL_ONLY (1 << 24)
> +#define MLX5_CQ_DBR_CMD_ALL (0 << 24)
> +
>  /* Action type of header modification. */
>  enum {
>  	MLX5_MODIFICATION_TYPE_SET = 0x1,
> diff --git a/drivers/vdpa/mlx5/Makefile b/drivers/vdpa/mlx5/Makefile
> index 5472797..7f13756 100644
> --- a/drivers/vdpa/mlx5/Makefile
> +++ b/drivers/vdpa/mlx5/Makefile
> @@ -9,6 +9,7 @@ LIB = librte_pmd_mlx5_vdpa.a
>  # Sources.
>  SRCS-$(CONFIG_RTE_LIBRTE_MLX5_VDPA_PMD) += mlx5_vdpa.c
>  SRCS-$(CONFIG_RTE_LIBRTE_MLX5_VDPA_PMD) += mlx5_vdpa_mem.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MLX5_VDPA_PMD) += mlx5_vdpa_event.c
>  
>  # Basic CFLAGS.
>  CFLAGS += -O3
> diff --git a/drivers/vdpa/mlx5/meson.build b/drivers/vdpa/mlx5/meson.build
> index 7e5dd95..c609f7c 100644
> --- a/drivers/vdpa/mlx5/meson.build
> +++ b/drivers/vdpa/mlx5/meson.build
> @@ -13,6 +13,7 @@ deps += ['hash', 'common_mlx5', 'vhost', 'bus_pci', 'eal', 'sched']
>  sources = files(
>  	'mlx5_vdpa.c',
>  	'mlx5_vdpa_mem.c',
> +	'mlx5_vdpa_event.c',
>  )
>  cflags_options = [
>  	'-std=c11',
> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
> index e27baea..30030b7 100644
> --- a/drivers/vdpa/mlx5/mlx5_vdpa.h
> +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
> @@ -9,9 +9,40 @@
>  
>  #include
>  #include
> +#include
> +#include
>  
>  #include
>  #include
> +#include
> +
> +
> +#define MLX5_VDPA_INTR_RETRIES 256
> +#define MLX5_VDPA_INTR_RETRIES_USEC 1000
> +
> +struct mlx5_vdpa_cq {
> +	uint16_t log_desc_n;
> +	uint32_t cq_ci:24;
> +	uint32_t arm_sn:2;
> +	rte_spinlock_t sl;
> +	struct mlx5_devx_obj *cq;
> +	struct mlx5dv_devx_umem *umem_obj;
> +	union {
> +		volatile void *umem_buf;
> +		volatile struct mlx5_cqe *cqes;
> +	};
> +	volatile uint32_t *db_rec;
> +	uint64_t errors;
> +};
> +
> +struct mlx5_vdpa_event_qp {
> +	struct mlx5_vdpa_cq cq;
> +	struct mlx5_devx_obj *fw_qp;
> +	struct mlx5_devx_obj *sw_qp;
> +	struct mlx5dv_devx_umem *umem_obj;
> +	void *umem_buf;
> +	volatile uint32_t *db_rec;
> +};
>  
>  struct mlx5_vdpa_query_mr {
>  	SLIST_ENTRY(mlx5_vdpa_query_mr) next;
> @@ -34,6 +65,10 @@ struct mlx5_vdpa_priv {
>  	uint32_t gpa_mkey_index;
>  	struct ibv_mr *null_mr;
>  	struct rte_vhost_memory *vmem;
> +	uint32_t eqn;
> +	struct mlx5dv_devx_event_channel *eventc;
> +	struct mlx5dv_devx_uar *uar;
> +	struct rte_intr_handle intr_handle;
>  	SLIST_HEAD(mr_list, mlx5_vdpa_query_mr) mr_list;
>  };
>  
> @@ -57,4 +92,58 @@ struct mlx5_vdpa_priv {
>   */
>  int mlx5_vdpa_mem_register(struct mlx5_vdpa_priv *priv);
>  
> +
> +/**
> + * Create an event QP and all its related resources.
> + *
> + * @param[in] priv
> + *   The vdpa driver private structure.
> + * @param[in] desc_n
> + *   Number of descriptors.
> + * @param[in] callfd
> + *   The guest notification file descriptor.
> + * @param[in/out] eqp
> + *   Pointer to the event QP structure.
> + *
> + * @return
> + *   0 on success, -1 otherwise and rte_errno is set.
> + */
> +int mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
> +			      int callfd, struct mlx5_vdpa_event_qp *eqp);
> +
> +/**
> + * Destroy an event QP and all its related resources.
> + *
> + * @param[in/out] eqp
> + *   Pointer to the event QP structure.
> + */
> +void mlx5_vdpa_event_qp_destroy(struct mlx5_vdpa_event_qp *eqp);
> +
> +/**
> + * Release all the event global resources.
> + *
> + * @param[in] priv
> + *   The vdpa driver private structure.
> + */
> +void mlx5_vdpa_event_qp_global_release(struct mlx5_vdpa_priv *priv);
> +
> +/**
> + * Setup CQE event.
> + *
> + * @param[in] priv
> + *   The vdpa driver private structure.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +int mlx5_vdpa_cqe_event_setup(struct mlx5_vdpa_priv *priv);
> +
> +/**
> + * Unset CQE event.
> + *
> + * @param[in] priv
> + *   The vdpa driver private structure.
> + */
> +void mlx5_vdpa_cqe_event_unset(struct mlx5_vdpa_priv *priv);
> +
>  #endif /* RTE_PMD_MLX5_VDPA_H_ */
> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
> new file mode 100644
> index 0000000..35518ad
> --- /dev/null
> +++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
> @@ -0,0 +1,399 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2019 Mellanox Technologies, Ltd
> + */
> +#include
> +#include
> +#include
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +#include
> +
> +#include "mlx5_vdpa_utils.h"
> +#include "mlx5_vdpa.h"
> +
> +
> +void
> +mlx5_vdpa_event_qp_global_release(struct mlx5_vdpa_priv *priv)
> +{
> +	if (priv->uar) {
> +		mlx5_glue->devx_free_uar(priv->uar);
> +		priv->uar = NULL;
> +	}
> +	if (priv->eventc) {
> +		mlx5_glue->devx_destroy_event_channel(priv->eventc);
> +		priv->eventc = NULL;
> +	}
> +	priv->eqn = 0;
> +}
> +
> +/* Prepare all the global resources for all the event objects. */
> +static int
> +mlx5_vdpa_event_qp_global_prepare(struct mlx5_vdpa_priv *priv)
> +{
> +	uint32_t lcore;
> +
> +	if (priv->eventc)
> +		return 0;
> +	lcore = (uint32_t)rte_lcore_to_cpu_id(-1);
> +	if (mlx5_glue->devx_query_eqn(priv->ctx, lcore, &priv->eqn)) {
> +		rte_errno = errno;
> +		DRV_LOG(ERR, "Failed to query EQ number %d.", rte_errno);
> +		return -1;
> +	}
> +	priv->eventc = mlx5_glue->devx_create_event_channel(priv->ctx,
> +			   MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA);
> +	if (!priv->eventc) {
> +		rte_errno = errno;
> +		DRV_LOG(ERR, "Failed to create event channel %d.",
> +			rte_errno);
> +		goto error;
> +	}
> +	priv->uar = mlx5_glue->devx_alloc_uar(priv->ctx, 0);
> +	if (!priv->uar) {
> +		rte_errno = errno;
> +		DRV_LOG(ERR, "Failed to allocate UAR.");
> +		goto error;
> +	}
> +	return 0;
> +error:
> +	mlx5_vdpa_event_qp_global_release(priv);
> +	return -1;
> +}
> +
> +static void
> +mlx5_vdpa_cq_destroy(struct mlx5_vdpa_cq *cq)
> +{
> +	if (cq->cq)
> +		claim_zero(mlx5_devx_cmd_destroy(cq->cq));
> +	if (cq->umem_obj)
> +		claim_zero(mlx5_glue->devx_umem_dereg(cq->umem_obj));
> +	if (cq->umem_buf)
> +		rte_free((void *)(uintptr_t)cq->umem_buf);
> +	memset(cq, 0, sizeof(*cq));
> +}
> +
> +static inline void
> +mlx5_vdpa_cq_arm(struct mlx5_vdpa_priv *priv, struct mlx5_vdpa_cq *cq)
> +{
> +	const unsigned int cqe_mask = (1 << cq->log_desc_n) - 1;
> +	uint32_t arm_sn = cq->arm_sn << MLX5_CQ_SQN_OFFSET;
> +	uint32_t cq_ci = cq->cq_ci & MLX5_CI_MASK & cqe_mask;
> +	uint32_t doorbell_hi = arm_sn | MLX5_CQ_DBR_CMD_ALL | cq_ci;
> +	uint64_t doorbell = ((uint64_t)doorbell_hi << 32) | cq->cq->id;
> +	uint64_t db_be = rte_cpu_to_be_64(doorbell);
> +	uint32_t *addr = RTE_PTR_ADD(priv->uar->base_addr, MLX5_CQ_DOORBELL);
> +
> +	rte_io_wmb();
> +	cq->db_rec[MLX5_CQ_ARM_DB] = rte_cpu_to_be_32(doorbell_hi);
> +	rte_wmb();
> +#ifdef RTE_ARCH_64
> +	*(uint64_t *)addr = db_be;
> +#else
> +	*(uint32_t *)addr = db_be;
> +	rte_io_wmb();
> +	*((uint32_t *)addr + 1) = db_be >> 32;
> +#endif
> +	cq->arm_sn++;
> +}
> +
> +static int
> +mlx5_vdpa_cq_create(struct mlx5_vdpa_priv *priv, uint16_t log_desc_n,
> +		    int callfd, struct mlx5_vdpa_cq *cq)
> +{
> +	struct mlx5_devx_cq_attr attr;
> +	size_t pgsize = sysconf(_SC_PAGESIZE);
> +	uint32_t umem_size;
> +	int ret;
> +	uint16_t event_nums[1] = {0};
> +
> +	cq->log_desc_n = log_desc_n;
> +	umem_size = sizeof(struct mlx5_cqe) * (1 << log_desc_n) +
> +						       sizeof(*cq->db_rec) * 2;
> +	cq->umem_buf = rte_zmalloc(__func__, umem_size, 4096);
> +	if (!cq->umem_buf) {
> +		DRV_LOG(ERR, "Failed to allocate memory for CQ.");
> +		rte_errno = ENOMEM;
> +		return -ENOMEM;
> +	}
> +	cq->umem_obj = mlx5_glue->devx_umem_reg(priv->ctx,
> +						(void *)(uintptr_t)cq->umem_buf,
> +						umem_size,
> +						IBV_ACCESS_LOCAL_WRITE);
> +	if (!cq->umem_obj) {
> +		DRV_LOG(ERR, "Failed to register umem for CQ.");
> +		goto error;
> +	}
> +	attr.q_umem_valid = 1;
> +	attr.db_umem_valid = 1;
> +	attr.use_first_only = 0;
> +	attr.overrun_ignore = 0;
> +	attr.uar_page_id = priv->uar->page_id;
> +	attr.q_umem_id = cq->umem_obj->umem_id;
> +	attr.q_umem_offset = 0;
> +	attr.db_umem_id = cq->umem_obj->umem_id;
> +	attr.db_umem_offset = sizeof(struct mlx5_cqe) * (1 << log_desc_n);
> +	attr.eqn = priv->eqn;
> +	attr.log_cq_size = log_desc_n;
> +	attr.log_page_size = rte_log2_u32(pgsize);
> +	cq->cq = mlx5_devx_cmd_create_cq(priv->ctx, &attr);
> +	if (!cq->cq)
> +		goto error;
> +	cq->db_rec = RTE_PTR_ADD(cq->umem_buf, (uintptr_t)attr.db_umem_offset);
> +	cq->cq_ci = 0;
> +	rte_spinlock_init(&cq->sl);
> +	/* Subscribe CQ event to the event channel controlled by the driver. */
> +	ret = mlx5_glue->devx_subscribe_devx_event(priv->eventc, cq->cq->obj,
> +						   sizeof(event_nums),
> +						   event_nums,
> +						   (uint64_t)(uintptr_t)cq);
> +	if (ret) {
> +		DRV_LOG(ERR, "Failed to subscribe CQE event.");
> +		rte_errno = errno;
> +		goto error;
> +	}
> +	/* Subscribe CQ event to the guest FD only if it is not in poll mode. */
> +	if (callfd != -1) {
> +		ret = mlx5_glue->devx_subscribe_devx_event_fd(priv->eventc,
> +							      callfd,
> +							      cq->cq->obj, 0);
> +		if (ret) {
> +			DRV_LOG(ERR, "Failed to subscribe CQE event fd.");
> +			rte_errno = errno;
> +			goto error;
> +		}
> +	}
> +	/* First arming. */
> +	mlx5_vdpa_cq_arm(priv, cq);
> +	return 0;
> +error:
> +	mlx5_vdpa_cq_destroy(cq);
> +	return -1;
> +}
> +
> +static inline void __rte_unused
> +mlx5_vdpa_cq_poll(struct mlx5_vdpa_priv *priv __rte_unused,
> +		  struct mlx5_vdpa_cq *cq)
> +{
> +	struct mlx5_vdpa_event_qp *eqp =
> +				container_of(cq, struct mlx5_vdpa_event_qp, cq);
> +	const unsigned int cqe_size = 1 << cq->log_desc_n;
> +	const unsigned int cqe_mask = cqe_size - 1;
> +	int ret;
> +
> +	do {
> +		volatile struct mlx5_cqe *cqe = cq->cqes + (cq->cq_ci &
> +							    cqe_mask);
> +
> +		ret = check_cqe(cqe, cqe_size, cq->cq_ci);
> +		switch (ret) {
> +		case MLX5_CQE_STATUS_ERR:
> +			cq->errors++;
> +			/* fall-through */
> +		case MLX5_CQE_STATUS_SW_OWN:
> +			cq->cq_ci++;
> +			break;
> +		case MLX5_CQE_STATUS_HW_OWN:
> +		default:
> +			break;
> +		}
> +	} while (ret != MLX5_CQE_STATUS_HW_OWN);

Isn't there a risk of an endless loop here?

> +	rte_io_wmb();
> +	/* Ring CQ doorbell record. */
> +	cq->db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
> +	rte_io_wmb();
> +	/* Ring SW QP doorbell record. */
> +	eqp->db_rec[0] = rte_cpu_to_be_32(cq->cq_ci + cqe_size);
> +