Date: Mon, 27 Nov 2017 03:42:34 -0500 (EST)
From: Victor Kaplansky
To: Maxime Coquelin
Cc: dev@dpdk.org, yliu@fridaylinux.org, tiwei bie, jianfeng tan,
    stable@dpdk.org, jfreiman@redhat.com
Subject: Re: [dpdk-dev] [PATCH v2 2/3] vhost: protect dirty logging against
 logging base change

----- Original Message -----
> From: "Maxime Coquelin"
> To: "Victor Kaplansky"
> Cc: dev@dpdk.org, yliu@fridaylinux.org, "tiwei bie", "jianfeng tan",
>     stable@dpdk.org, jfreiman@redhat.com
> Sent: Monday, November 27, 2017 10:27:22 AM
> Subject: Re: [PATCH v2 2/3] vhost: protect dirty logging against logging base change
>
> Hi Victor,
>
> On 11/27/2017 09:16 AM, Victor Kaplansky wrote:
> > Hi,
> >
> > While I agree that taking a full-fledged lock with rte_rwlock_read_lock()
> > solves the race condition, I'm afraid it would be too expensive when
> > logging is off, since it introduces lock acquisition and release into
> > the main flow of ring updates.
>
> Actually, my v2 fixes the performance penalty when logging is off. The
> lock is now taken after the logging feature check.
>
> But still, I agree the logging-on case will suffer from a performance
> penalty.

Yes, checking the logging feature is better than nothing, but
VHOST_F_LOG_ALL only marks whether logging is supported by the device,
not whether logging is actually in use. Thus, any guest that negotiates
the feature will hit the performance degradation even when no migration
is taking place.

>
> > It is OK for now, as it fixes the bug, but we need to perform more
> > careful performance measurements and see whether the performance
> > degradation is not too prohibitive.
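To make concrete what such measurements would cover, the ordering under
discussion on the ring-update path looks roughly like this (a sketch
only: the helper name and the log_lock field are hypothetical, while
VHOST_F_LOG_ALL, the features field and the rte_rwlock_*() calls are the
existing DPDK names):

static inline void
vhost_log_update(struct virtio_net *dev, uint64_t offset, uint64_t len)
{
	/* Negotiated-feature check: cheap, but it stays true for the
	 * whole lifetime of a guest that negotiated VHOST_F_LOG_ALL,
	 * not only while a migration is actually running. */
	if (likely(!(dev->features & (1ULL << VHOST_F_LOG_ALL))))
		return;

	/* Read lock against a concurrent SET_LOG_BASE changing the
	 * logging base/size underneath us ("log_lock" is a
	 * hypothetical field used only for illustration). */
	rte_rwlock_read_lock(&dev->log_lock);
	/* ... mark the touched pages dirty in the log ... */
	rte_rwlock_read_unlock(&dev->log_lock);
}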
> >
> > As an alternative, we may consider using more lightweight busy looping.

> I think it will end up being almost the same, as both threads will need
> to busy loop: the PMD thread to be sure the protocol thread isn't
> unmapping the region before doing the logging, and the protocol thread
> to be sure the PMD thread is not doing logging before handling the set
> log base.

I'm not fully aware of how rte_rwlock_read_lock() is implemented, but
theoretically busy looping should be much cheaper when the lock is only
very rarely taken by one side.

> Maybe you have something else in mind?
>
> > Also, let's fix in this series the __sync_fetch_and_or_8 ->
> > __sync_fetch_and_or change, as it may improve performance slightly.
>
> Sure, this can be done, but it would need to be benchmarked first.

Agree.

> Regards,
> Maxime
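For reference, the change being proposed would look roughly like this
(a sketch only: the helper follows the general shape of the vhost
page-logging code rather than the exact patch, and whether the
type-generic builtin actually ends up faster than the explicitly sized
__sync_fetch_and_or_8() is precisely what the benchmarking would have
to show):

/* Set the dirty bit for one page in the byte-wide log bitmap using
 * GCC's type-generic atomic OR, which picks the operand width from
 * the pointed-to type (here a single byte). */
static inline void
vhost_log_page(uint8_t *log_base, uint64_t page)
{
	__sync_fetch_and_or(&log_base[page / 8],
			    (uint8_t)(1 << (page % 8)));
}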