From: Maxime Coquelin
To: dev@dpdk.org, tiwei.bie@intel.com, yliu@fridaylinux.org, jfreimann@redhat.com, jianfeng.tan@intel.com
Cc: stable@dpdk.org, Maxime Coquelin
Date: Mon, 29 Jan 2018 17:30:39 +0100
Message-Id: <20180129163040.9560-2-maxime.coquelin@redhat.com>
In-Reply-To: <20180129163040.9560-1-maxime.coquelin@redhat.com>
References: <20180129163040.9560-1-maxime.coquelin@redhat.com>
Subject: [dpdk-stable] [PATCH v3 1/2] vhost: fix iotlb pool out-of-memory handling
List-Id: patches for DPDK stable branches

In the unlikely case the IOTLB memory pool runs out of memory, an issue
may happen if all entries are used by the IOTLB cache and an IOTLB miss
happens. If the IOTLB pending list is empty, then no memory is freed and
the allocation fails a second time.

This patch fixes this by performing a random eviction from the IOTLB
cache when the IOTLB pending list is empty, ensuring the second
allocation attempt will succeed.

In the same spirit, the opposite is done when inserting an entry into
the IOTLB cache fails because the pool is out of memory: in that case,
the IOTLB pending list is flushed if the IOTLB cache is empty, to ensure
the new entry can be inserted. (A standalone sketch of this
retry-after-eviction logic follows the patch below.)
Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
Fixes: f72c2ad63aeb ("vhost: add pending IOTLB miss request list and helpers")
Cc: stable@dpdk.org

Signed-off-by: Maxime Coquelin
---
 lib/librte_vhost/iotlb.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/lib/librte_vhost/iotlb.c b/lib/librte_vhost/iotlb.c
index b74cc6a78..72cd27df8 100644
--- a/lib/librte_vhost/iotlb.c
+++ b/lib/librte_vhost/iotlb.c
@@ -50,6 +50,9 @@ struct vhost_iotlb_entry {
 
 #define IOTLB_CACHE_SIZE 2048
 
+static void
+vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq);
+
 static void
 vhost_user_iotlb_pending_remove_all(struct vhost_virtqueue *vq)
 {
@@ -95,9 +98,11 @@ vhost_user_iotlb_pending_insert(struct vhost_virtqueue *vq,
 
 	ret = rte_mempool_get(vq->iotlb_pool, (void **)&node);
 	if (ret) {
-		RTE_LOG(INFO, VHOST_CONFIG,
-				"IOTLB pool empty, clear pending misses\n");
-		vhost_user_iotlb_pending_remove_all(vq);
+		RTE_LOG(DEBUG, VHOST_CONFIG, "IOTLB pool empty, clear entries\n");
+		if (!TAILQ_EMPTY(&vq->iotlb_pending_list))
+			vhost_user_iotlb_pending_remove_all(vq);
+		else
+			vhost_user_iotlb_cache_random_evict(vq);
 		ret = rte_mempool_get(vq->iotlb_pool, (void **)&node);
 		if (ret) {
 			RTE_LOG(ERR, VHOST_CONFIG, "IOTLB pool still empty, failure\n");
@@ -186,8 +191,11 @@ vhost_user_iotlb_cache_insert(struct vhost_virtqueue *vq, uint64_t iova,
 
 	ret = rte_mempool_get(vq->iotlb_pool, (void **)&new_node);
 	if (ret) {
-		RTE_LOG(DEBUG, VHOST_CONFIG, "IOTLB pool empty, evict one entry\n");
-		vhost_user_iotlb_cache_random_evict(vq);
+		RTE_LOG(DEBUG, VHOST_CONFIG, "IOTLB pool empty, clear entries\n");
+		if (!TAILQ_EMPTY(&vq->iotlb_list))
+			vhost_user_iotlb_cache_random_evict(vq);
+		else
+			vhost_user_iotlb_pending_remove_all(vq);
 		ret = rte_mempool_get(vq->iotlb_pool, (void **)&new_node);
 		if (ret) {
 			RTE_LOG(ERR, VHOST_CONFIG, "IOTLB pool still empty, failure\n");
-- 
2.14.3
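
[Editor's sketch] For illustration, a minimal, self-contained C toy of the
retry-after-eviction pattern the patch above introduces. It is not the DPDK
vhost code: POOL_SIZE, pool_get(), list_empty(), remove_all(), evict_one()
and pending_insert() are hypothetical stand-ins for the real
rte_mempool_get(), TAILQ_EMPTY(), vhost_user_iotlb_pending_remove_all() and
vhost_user_iotlb_cache_random_evict() calls shown in the diff.

/*
 * Toy model (not the DPDK code): a fixed-size entry pool shared by an
 * "IOTLB cache" list and an "IOTLB pending" list.  When the pool is
 * exhausted, reclaim entries from whichever list actually holds them,
 * then retry the allocation once -- the behaviour the patch introduces.
 */
#include <stdio.h>

#define POOL_SIZE 4

enum owner { FREE, CACHE, PENDING };

static enum owner pool[POOL_SIZE];	/* who owns each pool slot */

/* Return a free slot index, or -1 if the pool is exhausted. */
static int pool_get(void)
{
	for (int i = 0; i < POOL_SIZE; i++)
		if (pool[i] == FREE)
			return i;
	return -1;
}

static int list_empty(enum owner who)
{
	for (int i = 0; i < POOL_SIZE; i++)
		if (pool[i] == who)
			return 0;
	return 1;
}

/* Release every slot owned by 'who' (models flushing the pending list). */
static void remove_all(enum owner who)
{
	for (int i = 0; i < POOL_SIZE; i++)
		if (pool[i] == who)
			pool[i] = FREE;
}

/* Release one slot owned by 'who' (models the random cache eviction). */
static void evict_one(enum owner who)
{
	for (int i = 0; i < POOL_SIZE; i++) {
		if (pool[i] == who) {
			pool[i] = FREE;
			return;
		}
	}
}

/* Allocation path of a pending-miss insert, mirroring the patched logic. */
static int pending_insert(void)
{
	int slot = pool_get();

	if (slot < 0) {
		/*
		 * Pool empty: prefer flushing the pending list, but if it is
		 * empty every entry sits in the cache, so evict one cache
		 * entry instead to guarantee the retry below can succeed.
		 */
		if (!list_empty(PENDING))
			remove_all(PENDING);
		else
			evict_one(CACHE);

		slot = pool_get();
		if (slot < 0) {
			fprintf(stderr, "pool still empty, failure\n");
			return -1;
		}
	}
	pool[slot] = PENDING;
	return slot;
}

int main(void)
{
	/* Worst case fixed by the patch: the cache holds every entry when a
	 * miss needs a pending entry; the eviction makes the retry succeed. */
	for (int i = 0; i < POOL_SIZE; i++)
		pool[i] = CACHE;

	printf("pending_insert() -> slot %d\n", pending_insert());
	return 0;
}

The cache-insert path patched in the last hunk is the mirror image of
pending_insert() above: it evicts a random cache entry first and only
flushes the pending list when the cache itself is empty.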