From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: ktraynor@redhat.com, david.marchand@redhat.com, fbl@redhat.com
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>, stable@dpdk.org
Subject: [PATCH 1/2] vhost: discard too small descriptor chains
Date: Tue, 23 Aug 2022 11:50:53 +0200
Message-Id: <20220823095054.312696-2-maxime.coquelin@redhat.com>
In-Reply-To: <20220823095054.312696-1-maxime.coquelin@redhat.com>
References: <20220823095054.312696-1-maxime.coquelin@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 8bit

This patch discards descriptor chains which are smaller than the
Virtio-net header size, as well as ones that are exactly equal to it.
Indeed, such descriptor chain sizes mean there is no packet data.
Fixes: 62250c1d0978 ("vhost: extract split ring handling from Rx and Tx functions")
Cc: stable@dpdk.org

CVE-2022-2132

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
(cherry picked from commit 205409845e2d2f280fe812746bf93544d375fc8a)
---
 lib/librte_vhost/virtio_net.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index ebeec8fd18..dca696446a 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1112,11 +1112,6 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	buf_iova = buf_vec[vec_idx].buf_iova;
 	buf_len = buf_vec[vec_idx].buf_len;
 
-	if (unlikely(buf_len < dev->vhost_hlen && nr_vec <= 1)) {
-		error = -1;
-		goto out;
-	}
-
 	if (virtio_net_with_host_offload(dev)) {
 		if (unlikely(buf_len < sizeof(struct virtio_net_hdr))) {
 			/*
@@ -1350,20 +1345,24 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	for (i = 0; i < count; i++) {
 		struct buf_vector buf_vec[BUF_VECTOR_MAX];
 		uint16_t head_idx;
-		uint32_t dummy_len;
+		uint32_t buf_len;
 		uint16_t nr_vec = 0;
 		int err;
 
 		if (unlikely(fill_vec_buf_split(dev, vq,
 						vq->last_avail_idx + i,
 						&nr_vec, buf_vec,
-						&head_idx, &dummy_len,
+						&head_idx, &buf_len,
 						VHOST_ACCESS_RO) < 0))
 			break;
 
 		if (likely(dev->dequeue_zero_copy == 0))
 			update_shadow_used_ring_split(vq, head_idx, 0);
 
+		if (unlikely(buf_len <= dev->vhost_hlen)) {
+			break;
+		}
+
 		pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
 		if (unlikely(pkts[i] == NULL)) {
 			RTE_LOG(ERR, VHOST_DATA,
@@ -1460,14 +1459,14 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	for (i = 0; i < count; i++) {
 		struct buf_vector buf_vec[BUF_VECTOR_MAX];
 		uint16_t buf_id;
-		uint32_t dummy_len;
+		uint32_t buf_len;
 		uint16_t desc_count, nr_vec = 0;
 		int err;
 
 		if (unlikely(fill_vec_buf_packed(dev, vq,
 						vq->last_avail_idx, &desc_count,
 						buf_vec, &nr_vec,
-						&buf_id, &dummy_len,
+						&buf_id, &buf_len,
 						VHOST_ACCESS_RO) < 0))
 			break;
 
@@ -1475,6 +1474,9 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			update_shadow_used_ring_packed(vq, buf_id, 0,
 					desc_count);
 
+		if (unlikely(buf_len <= dev->vhost_hlen))
+			break;
+
 		pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
 		if (unlikely(pkts[i] == NULL)) {
 			RTE_LOG(ERR, VHOST_DATA,
-- 
2.37.1
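For context, here is a minimal, self-contained sketch of the guard both
dequeue paths gain above. It is not the DPDK implementation: struct toy_dev
and desc_chain_has_payload are illustrative stand-ins for dev->vhost_hlen
and the in-loop buf_len check, chosen only to show why "<=" is used (a chain
exactly the size of the header still carries no packet data).

/* Illustrative sketch only; simplified stand-ins, not the DPDK types. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct toy_dev {
	uint32_t vhost_hlen;	/* stands in for dev->vhost_hlen */
};

/*
 * A descriptor chain can carry packet data only when its total length is
 * strictly larger than the virtio-net header; chains smaller than or equal
 * to the header are the ones the patch discards.
 */
static bool
desc_chain_has_payload(const struct toy_dev *dev, uint32_t buf_len)
{
	return buf_len > dev->vhost_hlen;
}

int
main(void)
{
	/* 12 bytes, e.g. sizeof(struct virtio_net_hdr_mrg_rxbuf). */
	struct toy_dev dev = { .vhost_hlen = 12 };

	assert(!desc_chain_has_payload(&dev, 0));	/* too small: discard */
	assert(!desc_chain_has_payload(&dev, 12));	/* header only: discard */
	assert(desc_chain_has_payload(&dev, 13));	/* header plus one data byte */
	return 0;
}

Note that the per-buffer check removed in the first hunk only rejected
single-buffer chains shorter than the header; moving a "<=" check on the
whole chain length into the dequeue loops covers multi-buffer chains and
header-only chains as well.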