From: Tiwei Bie
To: Maxime Coquelin
Cc: dev@dpdk.org, yliu@fridaylinux.org, jfreiman@redhat.com, mst@redhat.com,
 vkaplans@redhat.com, jasowang@redhat.com
Date: Tue, 5 Sep 2017 15:11:06 +0800
Message-ID: <20170905071106.GA22515@debian-ZGViaWFuCg>
References: <20170831095023.21037-1-maxime.coquelin@redhat.com>
 <20170831095023.21037-9-maxime.coquelin@redhat.com>
In-Reply-To: <20170831095023.21037-9-maxime.coquelin@redhat.com>
Subject: Re: [dpdk-dev] [PATCH 08/21] vhost: iotlb: add pending miss request list and helpers

On Thu, Aug 31, 2017 at 11:50:10AM +0200, Maxime Coquelin wrote:
> In order to be able to handle other ports or queues while waiting
> for an IOTLB miss reply, a pending list is created so that the waiter
> can return and retry later by sending the miss request again.
> 
> Signed-off-by: Maxime Coquelin
> ---
>  lib/librte_vhost/iotlb.c | 88 ++++++++++++++++++++++++++++++++++++++++++++++--
>  lib/librte_vhost/iotlb.h |  4 +++
>  lib/librte_vhost/vhost.h |  1 +
>  3 files changed, 91 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/librte_vhost/iotlb.c b/lib/librte_vhost/iotlb.c
> index 1b739dae5..d014bfe98 100644
> --- a/lib/librte_vhost/iotlb.c
> +++ b/lib/librte_vhost/iotlb.c
> @@ -49,7 +49,86 @@ struct vhost_iotlb_entry {
>  	uint8_t perm;
>  };
>  
> -#define IOTLB_CACHE_SIZE	1024
> +#define IOTLB_CACHE_SIZE	2048
> +
> +static void vhost_user_iotlb_pending_remove_all(struct vhost_virtqueue *vq)
> +{
> +	struct vhost_iotlb_entry *node, *temp_node;
> +
> +	rte_rwlock_write_lock(&vq->iotlb_lock);
> +
> +	TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
> +		TAILQ_REMOVE(&vq->iotlb_pending_list, node, next);
> +		rte_mempool_put(vq->iotlb_pool, node);
> +	}
> +
> +	rte_rwlock_write_unlock(&vq->iotlb_lock);
> +}
> +
> +int vhost_user_iotlb_pending_miss(struct vhost_virtqueue *vq, uint64_t iova,
> +				uint8_t perm)
> +{
> +	struct vhost_iotlb_entry *node;
> +	int found = 0;
> +

The return value of this function is a boolean, so it would be better
to return bool instead of int.
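Something like the following untested sketch is what I have in mind
(the prototype in iotlb.h would need the same change, and <stdbool.h>
would have to be included):

#include <stdbool.h>

bool vhost_user_iotlb_pending_miss(struct vhost_virtqueue *vq, uint64_t iova,
				uint8_t perm)
{
	struct vhost_iotlb_entry *node;
	bool found = false;

	rte_rwlock_read_lock(&vq->iotlb_lock);

	/* Look for an already pending miss request for this IOVA/perm. */
	TAILQ_FOREACH(node, &vq->iotlb_pending_list, next) {
		if ((node->iova == iova) && (node->perm == perm)) {
			found = true;
			break;
		}
	}

	rte_rwlock_read_unlock(&vq->iotlb_lock);

	return found;
}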

> +	rte_rwlock_read_lock(&vq->iotlb_lock);
> +
> +	TAILQ_FOREACH(node, &vq->iotlb_pending_list, next) {
> +		if ((node->iova == iova) && (node->perm == perm)) {
> +			found = 1;
> +			break;
> +		}
> +	}
> +
> +	rte_rwlock_read_unlock(&vq->iotlb_lock);
> +
> +	return found;
> +}
> +
> +void vhost_user_iotlb_pending_insert(struct vhost_virtqueue *vq,
> +				uint64_t iova, uint8_t perm)
> +{
> +	struct vhost_iotlb_entry *node;
> +	int ret;
> +
> +	ret = rte_mempool_get(vq->iotlb_pool, (void **)&node);
> +	if (ret) {
> +		RTE_LOG(ERR, VHOST_CONFIG, "IOTLB pool empty, invalidate cache\n");

I think the log level here should be INFO or something similar, not ERR.

> +		vhost_user_iotlb_pending_remove_all(vq);
> +		ret = rte_mempool_get(vq->iotlb_pool, (void **)&node);
> +		if (ret) {
> +			RTE_LOG(ERR, VHOST_CONFIG, "IOTLB pool still empty, failure\n");
> +			return;
> +		}
> +	}
> +
> +	node->iova = iova;
> +	node->perm = perm;
> +
> +	rte_rwlock_write_lock(&vq->iotlb_lock);
> +
> +	TAILQ_INSERT_TAIL(&vq->iotlb_pending_list, node, next);
> +
> +	rte_rwlock_write_unlock(&vq->iotlb_lock);
> +}
> +
> +static void vhost_user_iotlb_pending_remove(struct vhost_virtqueue *vq,
> +				uint64_t iova, uint64_t size, uint8_t perm)
> +{
> +	struct vhost_iotlb_entry *node, *temp_node;
> +
> +	/* .iotlb_lock already locked by the caller */
> +	TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
> +		if (node->iova < iova)
> +			continue;
> +		if (node->iova >= iova + size)
> +			continue;
> +		if ((node->perm & perm) != node->perm)
> +			continue;
> +		TAILQ_REMOVE(&vq->iotlb_pending_list, node, next);
> +		rte_mempool_put(vq->iotlb_pool, node);
> +	}
> +}
> 
>  static void vhost_user_iotlb_cache_remove_all(struct vhost_virtqueue *vq)
>  {
> @@ -106,7 +185,10 @@ void vhost_user_iotlb_cache_insert(struct vhost_virtqueue *vq, uint64_t iova,
>  	TAILQ_INSERT_TAIL(&vq->iotlb_list, new_node, next);
>  
>  unlock:
> +	vhost_user_iotlb_pending_remove(vq, iova, size, perm);
> +
>  	rte_rwlock_write_unlock(&vq->iotlb_lock);
> +

This empty line should be removed.

Best regards,
Tiwei Bie

>  }
>  
>  void vhost_user_iotlb_cache_remove(struct vhost_virtqueue *vq,
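
By the way, just to confirm that I understand the flow described in the
commit message, I guess a translation miss will be handled roughly like
the sketch below. The caller side is only illustrative: translate_iova(),
vhost_user_iotlb_cache_find() and vhost_user_iotlb_miss() are my
assumptions about other patches in this series, not part of this patch.

static uint64_t
translate_iova(struct virtio_net *dev, struct vhost_virtqueue *vq,
		uint64_t iova, uint64_t size, uint8_t perm)
{
	uint64_t vva;

	/* Fast path: the translation is already in the IOTLB cache. */
	vva = vhost_user_iotlb_cache_find(vq, iova, &size, perm);
	if (vva)
		return vva;

	/*
	 * Send a miss request only if none is already pending for this
	 * IOVA/perm, then return so that other ports or queues can be
	 * handled while waiting for the IOTLB update.
	 */
	if (!vhost_user_iotlb_pending_miss(vq, iova, perm)) {
		vhost_user_iotlb_pending_insert(vq, iova, perm);
		vhost_user_iotlb_miss(dev, iova, perm);
	}

	return 0;
}

Is that the idea?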