From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maxime Coquelin
To: Yuanhan Liu
Cc: dev@dpdk.org, jfreiman@redhat.com, tiwei.bie@intel.com, mst@redhat.com, vkaplans@redhat.com, jasowang@redhat.com
Date: Mon, 11 Sep 2017 09:34:30 +0200
Subject: Re: [dpdk-dev] [PATCH 21/21] vhost: iotlb: reduce iotlb read lock usage
In-Reply-To: <20170911041821.GI9736@yliu-home>
References: <20170831095023.21037-1-maxime.coquelin@redhat.com> <20170831095023.21037-22-maxime.coquelin@redhat.com> <20170911041821.GI9736@yliu-home>
List-Id: DPDK patches and discussions

Hi Yuanhan,

On 09/11/2017 06:18 AM, Yuanhan Liu wrote:
> On Thu, Aug 31, 2017 at 11:50:23AM +0200, Maxime Coquelin wrote:
>> Prior to this patch, iotlb cache's read/write lock was
>> read-locked at every guest IOVA to app VA translation,
>> i.e. at least once per packet with indirect off and twice
>> with indirect on.
>>
>> The problem is that rte_rwlock_read_lock() makes use of atomic
>> operation, which is costly.
>>
>> This patch introduces iotlb lock helpers, so that a full burst
>> can be protected with taking the lock once, which reduces the
>> number of atomic operations by up to 64 with indirect
>> descriptors.
>
> You were assuming there is no single miss during a burst. If a miss
> happens, it requires 2 locks: one for _pending_miss and another one
> for _pending_insert. From this point of view, it's actually more
> expensive.

It's only more expensive when a miss actually happens, and in that
case the cost of taking the lock is negligible compared to the cost
of the miss itself.

> However, I won't call it a bad assumption (for the case of virtio
> PMD). And if you take this assumption, why not just deleting the
> pending list and moving the lock outside the _iotlb_find function()
> like what you did in this patch?

Because we need the pending list. When there is no matching entry in
the IOTLB cache, we have to send a miss request through the slave
channel. On miss request reception, QEMU performs the translation
and, in case of success, sends the result back through the main
channel using an update request.

While all this is being done, the backend could block and wait for
the update, stalling processing on the PMD thread. But that would be
really inefficient if other queues are being processed on the same
lcore. Moreover, if the IOVA is invalid, no update request is sent at
all, so the lcore would be blocked forever.
To overcome this blocking problem, what is done is that in case of a
miss, the PMD exits the burst and tries again later, giving other
virtqueues a chance to be processed while the update arrives.

And here comes the pending list. On the next try, the update may not
have arrived yet, so we need to check whether a miss has already been
sent for the same address & perm. Otherwise, we would flood QEMU with
miss requests for the same address.

> I don't really see the point of introducing the pending list.

I hope the above clarifies it. I will see if I can improve the pending
list protection, but honestly, its cost is negligible.

Cheers,
Maxime

> --yliu
>
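P.S. For reference, the pending-list deduplication described above can
be sketched roughly as follows. Again this is illustrative, not the
actual implementation: entry layout, list size, and function names are
hypothetical, and the slave-channel miss request is replaced by a
counter so the behavior can be observed in isolation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PENDING_MAX 8

/* Hypothetical pending-miss entry: one per outstanding miss request. */
struct pending_entry {
	uint64_t iova;
	uint8_t perm;
	bool in_use;
};

struct iotlb_sketch {
	struct pending_entry pending[PENDING_MAX];
	int miss_requests_sent;  /* stands in for slave-channel messages */
};

/* Is a miss for this (iova, perm) already in flight? */
static bool pending_miss(struct iotlb_sketch *tlb, uint64_t iova,
			 uint8_t perm)
{
	for (int i = 0; i < PENDING_MAX; i++)
		if (tlb->pending[i].in_use &&
		    tlb->pending[i].iova == iova &&
		    tlb->pending[i].perm == perm)
			return true;
	return false;
}

static void pending_insert(struct iotlb_sketch *tlb, uint64_t iova,
			   uint8_t perm)
{
	for (int i = 0; i < PENDING_MAX; i++) {
		if (!tlb->pending[i].in_use) {
			tlb->pending[i] =
			    (struct pending_entry){ iova, perm, true };
			return;
		}
	}
}

/* Called on a cache miss: only the first miss for a given (iova, perm)
 * triggers a request to QEMU. Retries of the same burst while the
 * update is still in flight are deduplicated, so QEMU is not flooded
 * with identical miss requests. */
static void handle_miss(struct iotlb_sketch *tlb, uint64_t iova,
			uint8_t perm)
{
	if (pending_miss(tlb, iova, perm))
		return;  /* miss already in flight, just retry later */
	pending_insert(tlb, iova, perm);
	tlb->miss_requests_sent++;
}
```

In this sketch, retrying the burst before the update arrives sends no
second request for the same address, which is exactly the flooding the
pending list is there to prevent.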