From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <2ffd4e93-8dab-15c1-0a93-feffe0b0f111@redhat.com>
Date: Thu, 5 May 2022 16:09:39 +0200
Subject: Re: [PATCH] net/vhost: fix access to freed memory
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Yuan Wang <yuanx.wang@intel.com>, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, weix.ling@intel.com
In-Reply-To: <20220311163512.76501-1-yuanx.wang@intel.com>
References: <20220311163512.76501-1-yuanx.wang@intel.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

Hi Yuan,

On 3/11/22 17:35, Yuan Wang wrote:
> This patch fixes a heap-use-after-free reported by ASan.
>
> It is possible for rte_vhost_dequeue_burst() to access a vq that has
> been freed when numa_realloc() gets called while the device is
> running. The control plane takes vq->access_lock to protect the vq
> from the data plane. Unfortunately, taking the lock fails at the
> moment the vq is freed, which allows rte_vhost_dequeue_burst() to
> access the fields of the vq and triggers a heap-use-after-free error.
>
> In the case of multiple queues, the vhost PMD can access queues that
> are not yet ready as soon as the first queue is ready, which makes no
> sense and also allows numa_realloc() and rte_vhost_dequeue_burst()
> to access the vq at the same time. By controlling vq->allow_queuing,
> we can make the PMD access only the queues that are ready.
>
> Fixes: 1ce3c7fe149 ("net/vhost: emulate device start/stop behavior")
>
> Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
> ---
>  drivers/net/vhost/rte_eth_vhost.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
>

It is indeed better for the Vhost PMD to not access virtqueues that
aren't ready.

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime
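For readers outside the thread, the gating described in the commit message can be sketched roughly as below. This is an illustrative, self-contained stand-in, not the actual patch: `demo_queue`, `demo_rx_burst()` and `demo_stop_queue()` are hypothetical names, and plain C11 atomics stand in for the driver's real `allow_queuing`/`while_queuing` flags and its DPDK types.

```c
#include <stdatomic.h>

/* Hypothetical, simplified stand-in for the PMD's per-queue state.
 * The flag names mirror the commit message; everything else is a sketch. */
struct demo_queue {
    atomic_int allow_queuing;  /* control plane: may the PMD touch this vq? */
    atomic_int while_queuing;  /* data plane: is a burst in progress now?   */
    int vq_ready;              /* stands in for the virtqueue being valid   */
};

/* Data-plane side: check allow_queuing both before and after raising
 * while_queuing, so the control plane can reliably wait out in-flight
 * bursts before freeing or reallocating the vq. */
static int demo_rx_burst(struct demo_queue *q)
{
    if (atomic_load(&q->allow_queuing) == 0)
        return 0;                       /* queue not ready: touch nothing */
    atomic_store(&q->while_queuing, 1); /* announce we are inside the vq  */
    if (atomic_load(&q->allow_queuing) == 0) {
        /* control plane revoked access between the two checks */
        atomic_store(&q->while_queuing, 0);
        return 0;
    }
    int n = q->vq_ready ? 1 : 0;        /* pretend to dequeue one packet  */
    atomic_store(&q->while_queuing, 0);
    return n;
}

/* Control-plane side: revoke access first, then spin until no burst is
 * active; only then is it safe to free or realloc the virtqueue. */
static void demo_stop_queue(struct demo_queue *q)
{
    atomic_store(&q->allow_queuing, 0);
    while (atomic_load(&q->while_queuing) != 0)
        ;                               /* wait for in-flight burst */
    q->vq_ready = 0;                    /* now safe to free/realloc */
}
```

The double check of `allow_queuing` is what closes the race: a burst that slipped past the first check either sees the revocation at the second check and backs out, or is still flagged via `while_queuing`, so the control plane waits for it before touching the queue.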