From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 19 Jan 2023 15:51:47 +0100
Subject: Re: [PATCH] vdpa/ifc: fix reconnection issue in SW assisted live migration
To: Andy Pei, dev@dpdk.org
Cc: chenbo.xia@intel.com, xiao.w.wang@intel.com, stable@dpdk.org
From: Maxime Coquelin
References: <1670829165-138835-1-git-send-email-andy.pei@intel.com>
In-Reply-To: <1670829165-138835-1-git-send-email-andy.pei@intel.com>

On 12/12/22 08:12, Andy Pei wrote:
> When SW assisted live migration is enabled with the argument
> "sw-live-migration=1", taking QEMU as the front end for example: after
> the source VM migrates to the destination VM, the vdpa process for the
> source VM is kept alive while its QEMU process is killed and then
> restarted. In this case, the vdpa driver does not perform the DMA map
> again and the data path does not work properly.
>
> The same scenario works fine with "sw-live-migration=0".
>
> The root cause is that the current driver code does not reset the
> running flag to 0 on device close, so the driver still treats the
> device as running and skips the DMA map.
>
> Fixes: 4bb531e152d3 ("net/ifc: support SW assisted VDPA live migration")
> Cc: stable@dpdk.org
>
> Signed-off-by: Andy Pei
> ---
>  drivers/vdpa/ifc/ifcvf_vdpa.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
> index 49d68ad..dc8600d 100644
> --- a/drivers/vdpa/ifc/ifcvf_vdpa.c
> +++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
> @@ -1044,6 +1044,8 @@ struct rte_vdpa_dev_info {
>
>  	vdpa_disable_vfio_intr(internal);
>
> +	rte_atomic32_set(&internal->running, 0);
> +
>  	ret = rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, false);
>  	if (ret && ret != -ENOTSUP)
>  		goto error;

Reviewed-by: Maxime Coquelin

Thanks,
Maxime
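The fix boils down to clearing the running flag on device close so that the next configuration redoes the DMA map. A minimal standalone C sketch of that pattern, using hypothetical `vdpa_state`, `dev_config`, and `dev_close` stand-ins rather than the actual ifcvf driver structures:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical model of the driver state; atomic_int stands in for the
 * rte_atomic32_t "running" field in the real driver. */
struct vdpa_state {
	atomic_int running;
	bool dma_mapped;
};

/* Configuration path: skips the DMA map if the device is still marked
 * running. With a stale flag left over from a previous session, the map
 * is silently skipped -- the bug described in the commit message. */
static void dev_config(struct vdpa_state *s)
{
	if (atomic_load(&s->running))
		return;
	s->dma_mapped = true;          /* perform DMA map */
	atomic_store(&s->running, 1);
}

/* Close path: the fix is the equivalent of the added
 * rte_atomic32_set(&internal->running, 0) -- clear the flag so a later
 * dev_config() maps DMA again. */
static void dev_close(struct vdpa_state *s)
{
	s->dma_mapped = false;         /* unmap */
	atomic_store(&s->running, 0);
}
```

After `dev_close()`, a reconnecting front end triggers `dev_config()` again and the DMA map is redone; without the flag reset, the early return would leave the data path unmapped.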