From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 57F7EA0524;
	Mon, 24 Feb 2020 17:55:16 +0100 (CET)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 55E391BE85;
	Mon, 24 Feb 2020 17:55:15 +0100 (CET)
Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129])
	by dpdk.org (Postfix) with ESMTP id 4C7AD1F1C
	for ; Mon, 24 Feb 2020 17:55:14 +0100 (CET)
Received: from Internal Mail-Server by MTLPINE1 (envelope-from
	asafp@mellanox.com) with ESMTPS (AES256-SHA encrypted);
	24 Feb 2020 18:55:10 +0200
Received: from pegasus07.mtr.labs.mlnx (pegasus07.mtr.labs.mlnx
	[10.210.16.112]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id
	01OGtARl023173; Mon, 24 Feb 2020 18:55:10 +0200
From: Matan Azrad
To: dev@dpdk.org
Cc: Viacheslav Ovsiienko, Thomas Monjalon, Maxime Coquelin
Date: Mon, 24 Feb 2020 16:55:06 +0000
Message-Id: <1582563307-24184-1-git-send-email-matan@mellanox.com>
X-Mailer: git-send-email 1.8.3.1
Subject: [dpdk-dev] [PATCH 1/2] vdpa/mlx5: fix guest notification timing
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

When the HW finishes consuming guest Rx descriptors, it creates a CQE
in the CQ. The mlx5 driver arms the CQ to get a notification when a
specific CQE index is created - the armed index is the next CQE index
that the driver should poll.

The mlx5 driver configured the kernel driver to send the notification
to the guest callfd at the same time the notification arrives to the
mlx5 driver. This means the guest was notified only for the first CQE
of each poll cycle, so when the driver polled CQEs for all the
available virtio queue descriptors, the guest was not notified for the
rest because no new poll cycle was triggered. Hence, the Rx queues
might get stuck when the guest does not work in poll mode.

Move the guest notification to after the driver has consumed all the
SW-owned CQEs, so the guest is notified only once all the SW CQEs are
polled. Also initialize the CQ so that the HW owns all CQEs at the
start.

Fixes: 8395927cdfaf ("vdpa/mlx5: prepare HW queues")

Signed-off-by: Matan Azrad
---
 drivers/vdpa/mlx5/mlx5_vdpa.h       |  1 +
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 18 +++++++-----------
 2 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index faeb54a..3324c9d 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -39,6 +39,7 @@ struct mlx5_vdpa_cq {
 	uint16_t log_desc_n;
 	uint32_t cq_ci:24;
 	uint32_t arm_sn:2;
+	int callfd;
 	rte_spinlock_t sl;
 	struct mlx5_devx_obj *cq;
 	struct mlx5dv_devx_umem *umem_obj;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 17fd9dd..16276f5 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -4,6 +4,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -156,17 +157,9 @@
 		rte_errno = errno;
 		goto error;
 	}
-	/* Subscribe CQ event to the guest FD only if it is not in poll mode. */
-	if (callfd != -1) {
-		ret = mlx5_glue->devx_subscribe_devx_event_fd(priv->eventc,
-							      callfd,
-							      cq->cq->obj, 0);
-		if (ret) {
-			DRV_LOG(ERR, "Failed to subscribe CQE event fd.");
-			rte_errno = errno;
-			goto error;
-		}
-	}
+	cq->callfd = callfd;
+	/* Init CQ to ones to be in HW owner in the start. */
+	memset((void *)(uintptr_t)cq->umem_buf, 0xFF, attr.db_umem_offset);
 	/* First arming. */
 	mlx5_vdpa_cq_arm(priv, cq);
 	return 0;
@@ -231,6 +224,9 @@
 	rte_spinlock_lock(&cq->sl);
 	mlx5_vdpa_cq_poll(priv, cq);
 	mlx5_vdpa_cq_arm(priv, cq);
+	if (cq->callfd != -1)
+		/* Notify guest for descriptors consuming. */
+		eventfd_write(cq->callfd, (eventfd_t)1);
 	rte_spinlock_unlock(&cq->sl);
 	DRV_LOG(DEBUG, "CQ %d event: new cq_ci = %u.", cq->cq->id,
 		cq->cq_ci);
-- 
1.8.3.1
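
For readers who want to see the resulting flow in isolation, below is a
minimal standalone sketch of the notify-after-poll ordering the patch
moves to. It is not the driver code: struct cq_ctx, cq_poll_all() and
cq_arm() are hypothetical stand-ins for the mlx5 internals, and only
eventfd_write() on a stored callfd (with -1 meaning the guest runs in
poll mode) mirrors what the patch actually does.

/*
 * Standalone sketch of the notify-after-poll pattern (assumed names,
 * not the mlx5 vDPA driver code). Build with: cc -o sketch sketch.c
 */
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>

struct cq_ctx {
	int callfd;      /* guest notification eventfd, -1 in poll mode */
	uint32_t cq_ci;  /* consumer index */
};

/* Stand-in for the real CQ polling: pretend two CQEs were consumed. */
static unsigned int
cq_poll_all(struct cq_ctx *cq)
{
	unsigned int n = 2;

	cq->cq_ci += n;
	return n;
}

/* Stand-in for re-arming the CQ at the next consumer index. */
static void
cq_arm(struct cq_ctx *cq)
{
	(void)cq;
}

/* The ordering that matters: poll everything SW owns, re-arm, then kick
 * the guest once, instead of relying on a per-CQE kernel notification. */
static void
cq_event_handler(struct cq_ctx *cq)
{
	cq_poll_all(cq);
	cq_arm(cq);
	if (cq->callfd != -1)
		eventfd_write(cq->callfd, (eventfd_t)1);
}

int
main(void)
{
	struct cq_ctx cq = { .callfd = eventfd(0, 0), .cq_ci = 0 };
	eventfd_t kicks = 0;

	cq_event_handler(&cq);
	eventfd_read(cq.callfd, &kicks);
	printf("cq_ci=%u, guest kicked %llu time(s)\n",
	       cq.cq_ci, (unsigned long long)kicks);
	return 0;
}

The point of the ordering is that the single eventfd_write() happens
only after every CQE currently owned by SW has been polled, so the
guest cannot miss completions that arrive within one poll cycle.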