Date: Fri, 8 Sep 2017 17:21:21 +0800
From: Yuanhan Liu
To: Maxime Coquelin
Cc: dev@dpdk.org, jfreiman@redhat.com, tiwei.bie@intel.com, mst@redhat.com,
 vkaplans@redhat.com, jasowang@redhat.com
Subject: Re: [dpdk-dev] [PATCH 07/21] vhost: add iotlb helper functions

On Fri, Sep 08, 2017 at 10:50:49AM +0200, Maxime Coquelin wrote:
> >>>>+{
> >>>>+	struct vhost_iotlb_entry *node, *temp_node;
> >>>>+
> >>>>+	rte_rwlock_write_lock(&vq->iotlb_lock);
> >>>>+
> >>>>+	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
> >>>>+		TAILQ_REMOVE(&vq->iotlb_list, node, next);
> >>>>+		rte_mempool_put(vq->iotlb_pool, node);
> >>>>+	}
> >>>>+
> >>>>+	rte_rwlock_write_unlock(&vq->iotlb_lock);
> >>>>+}
> >>>>+
> >>>>+void vhost_user_iotlb_cache_insert(struct vhost_virtqueue *vq, uint64_t iova,
> >>>>+				uint64_t uaddr, uint64_t size, uint8_t perm)
> >>>>+{
> >>>>+	struct vhost_iotlb_entry *node, *new_node;
> >>>>+	int ret;
> >>>>+
> >>>>+	ret = rte_mempool_get(vq->iotlb_pool, (void **)&new_node);
> >>>>+	if (ret) {
> >>>>+		RTE_LOG(ERR, VHOST_CONFIG, "IOTLB pool empty, invalidate cache\n");
> >>>
> >>>It's a cache, so why not consider removing one entry to make room for
> >>>the new one?
> >>
> >>It would mean having to track every lookup so as not to remove hot entries,
> >>which would have an impact on performance.
> >
> >You were removing all the cached entries; how could we do worse than that?
> >Even a random eviction would be better. Or, more simply, just remove the
> >head or the tail?
> 
> I think removing the head or the tail could cause deadlocks.
> For example, suppose we need to translate from 0x0 to 0x2000, with the
> page size being 0x1000.
> 
> If the cache is full, inserting 0x1000-0x1fff will evict 0x0-0xfff, so a
> miss will be sent for 0x0-0xfff, and then 0x1000-0x1fff will be evicted
> at insert time, and so on...

Okay, that means we can't simply remove the head or the tail.

> Note that we really need to size the cache large enough for performance
> reasons, so a full cache could only be caused by either a buggy or a
> malicious guest.

I agree. But for a malicious guest, it could lead to a DoS attack: assume
it keeps making vhost run out of cache entries, so that vhost keeps
printing the above message.

What I suggested was to evict one entry (by some policy) to make room for
the new one we want to insert; see the sketch at the end of this mail.
Note that it won't be a performance issue, IMO, because we only do that
when we run out of cache entries, which, according to what you said,
should not happen in normal cases.

	--yliu

> >>Moreover, the idea is to have the cache large enough, else you could
> >>face packet drops due to random cache misses.
> >>
> >>We might consider improving it, but I consider it an optimization that
> >>could be implemented later if needed.
> >>
> >>>>+		vhost_user_iotlb_cache_remove_all(vq);
> >>>>+		ret = rte_mempool_get(vq->iotlb_pool, (void **)&new_node);
> >>>>+		if (ret) {
> >>>>+			RTE_LOG(ERR, VHOST_CONFIG, "IOTLB pool still empty, failure\n");
> >>>>+			return;
> >>>>+		}
> >>>>+	}
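
For the record, below is a minimal, untested sketch of the evict-one idea,
for discussion only. The helper name vhost_user_iotlb_cache_random_evict()
and the iotlb_cache_nr counter are my own inventions, not part of this
patch (the counter could just as well be derived from the mempool). It
needs rte_random.h in addition to the headers this file already includes.

/*
 * Sketch: evict one (pseudo-)randomly chosen entry instead of flushing
 * the whole cache. Picking the victim at random avoids the head/tail
 * thrashing scenario described above, since no fixed position is always
 * the one evicted.
 */
static void
vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq)
{
	struct vhost_iotlb_entry *node;
	int i, entry_idx;

	rte_rwlock_write_lock(&vq->iotlb_lock);

	if (vq->iotlb_cache_nr == 0)
		goto unlock;

	/* iotlb_cache_nr is assumed to track the current list length. */
	entry_idx = rte_rand() % vq->iotlb_cache_nr;

	i = 0;
	TAILQ_FOREACH(node, &vq->iotlb_list, next) {
		if (i == entry_idx) {
			TAILQ_REMOVE(&vq->iotlb_list, node, next);
			rte_mempool_put(vq->iotlb_pool, node);
			vq->iotlb_cache_nr--;
			break;
		}
		i++;
	}

unlock:
	rte_rwlock_write_unlock(&vq->iotlb_lock);
}

The insert path would then call this helper instead of
vhost_user_iotlb_cache_remove_all() before retrying rte_mempool_get(),
so a full cache costs one eviction rather than a full flush.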