From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from mx1.redhat.com (mx1.redhat.com [209.132.183.28])
 by dpdk.org (Postfix) with ESMTP id DA4AE29CF;
 Fri, 8 Dec 2017 09:35:48 +0100 (CET)
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mx1.redhat.com (Postfix) with ESMTPS id 77C0FC04AC51;
 Fri, 8 Dec 2017 08:35:47 +0000 (UTC)
Received: from [10.36.112.38] (ovpn-112-38.ams2.redhat.com [10.36.112.38])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id E763418663;
 Fri, 8 Dec 2017 08:35:44 +0000 (UTC)
To: "Tan, Jianfeng" , Victor Kaplansky , "dev@dpdk.org" ,
 "yliu@fridaylinux.org" , "Bie, Tiwei" 
Cc: "stable@dpdk.org" , "jfreiman@redhat.com" 
References: <20171206135329.652-1-vkaplans@redhat.com>
 <00a522dc-2287-b3ed-7ca4-e666f6217c91@redhat.com>
From: Maxime Coquelin 
Message-ID: <7352a09d-f90d-5b3a-51f9-bfc82faf744e@redhat.com>
Date: Fri, 8 Dec 2017 09:35:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101
 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: 
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16
 (mx1.redhat.com [10.5.110.31]); Fri, 08 Dec 2017 08:35:47 +0000 (UTC)
Subject: Re: [dpdk-dev] [PATCH] vhost_user: protect active rings from async
 ring changes
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions 
List-Unsubscribe: , 
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: , 
X-List-Received-Date: Fri, 08 Dec 2017 08:35:49 -0000

On 12/08/2017 03:14 AM, Tan, Jianfeng wrote:
> 
> 
>> -----Original Message-----
>> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
>> Sent: Thursday, December 7, 2017 6:02 PM
>> To: Tan, Jianfeng; Victor Kaplansky; dev@dpdk.org; yliu@fridaylinux.org;
>> Bie, Tiwei
>> Cc: stable@dpdk.org; jfreiman@redhat.com
>> Subject: Re: [PATCH] vhost_user: protect active rings from async ring
>> changes
>>
>>
>>
>> On 12/07/2017 10:33 AM, Tan, Jianfeng wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Victor Kaplansky [mailto:vkaplans@redhat.com]
>>>> Sent: Wednesday, December 6, 2017 9:56 PM
>>>> To: dev@dpdk.org; yliu@fridaylinux.org; Bie, Tiwei; Tan, Jianfeng;
>>>> vkaplans@redhat.com
>>>> Cc: stable@dpdk.org; jfreiman@redhat.com; Maxime Coquelin
>>>> Subject: [PATCH] vhost_user: protect active rings from async ring changes
>>>>
>>>> When performing live migration or memory hot-plugging,
>>>> the changes to the device and vrings made by message handler
>>>> done independently from vring usage by PMD threads.
>>>>
>>>> This causes for example segfauls during live-migration
>>>
>>> segfauls -> segfaults?
>>>
>>>> with MQ enable, but in general virtually any request
>>>> sent by qemu changing the state of device can cause
>>>> problems.
>>>>
>>>> These patches fixes all above issues by adding a spinlock
>>>> to every vring and requiring message handler to start operation
>>>> only after ensuring that all PMD threads related to the divece
>>>
>>> Another typo: divece.
>>>
>>>> are out of critical section accessing the vring data.
>>>>
>>>> Each vring has its own lock in order to not create contention
>>>> between PMD threads of different vrings and to prevent
>>>> performance degradation by scaling queue pair number.
>>>
>>> Also wonder how much overhead it brings.
>>>
>>> Instead of locking each vring, can we just wait a while (10us, for
>>> example) after calling the destroy_device() callback, so that every
>>> PMD thread has enough time to get out of the critical section?
>>
>> No, because we are not destroying the device when it is needed.
>> Actually, once destroy_device() is called, it is likely that the
>> application has made sure the rings aren't being processed anymore
>> before returning from the callback (this is at least the case with the
>> Vhost PMD).
> 
> OK, I did not put it the right way, as there are multiple cases above:
> migration and memory hot plug. Let me try again:
> 
> Whenever a vhost thread handles a message affecting PMD threads (like
> SET_MEM_TABLE, GET_VRING_BASE, etc.), we can clear the dev flag
> VIRTIO_DEV_RUNNING and wait for a while so that PMD threads get out of
> those critical sections. After message handling, set the flag
> VIRTIO_DEV_RUNNING again.

I think you mean clearing the vq's enabled flag, because PMD threads
never check the VIRTIO_DEV_RUNNING flag.

> I suppose it can work, based on the assumption that PMD threads work in
> polling mode and can leave the critical section quickly and inevitably.

That sounds very fragile, because if the CPUs aren't perfectly isolated,
your PMD thread can be preempted, for interrupt handling for example.

Or what if for some reason the PMD thread's CPU stalls for a short
while? The latter is unlikely, but if it happens, it will be hard to
debug.

Let's first see the performance impact of using the spinlock. It might
not be that important, because 99.9999% of the time it will not even
spin.

Thanks,
Maxime

>>
>> The reason we need the lock is to protect PMD threads from the handling
>> of some vhost-user protocol requests.
>> For example, SET_MEM_TABLE in the case of memory hotplug, or
>> SET_LOG_BASE in the case of multiqueue, which is sent for every queue
>> pair and results in unmapping/remapping the logging area.
> 
> Yes, I understand how it fails.
> 
> Thanks,
> Jianfeng
> 
>>
>> Maxime