From: "Liu, Yong" <yong.liu@intel.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>,
	"Xia, Chenbo" <chenbo.xia@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "stable@dpdk.org" <stable@dpdk.org>
Subject: Re: [dpdk-stable] [PATCH] vhost: fix potential buffer overflow
Date: Thu, 25 Mar 2021 03:08:13 +0000
Message-ID: <f06dd390c23247f4ae8005dac19619d1@intel.com>
In-Reply-To: <7daf5cb5-173f-ce38-b14e-5dc00fe970c8@redhat.com>



> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, March 24, 2021 4:56 PM
> To: Liu, Yong <yong.liu@intel.com>; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; stable@dpdk.org
> Subject: Re: [PATCH] vhost: fix potential buffer overflow
> 
> Hi Marvin,
> 
> On 2/26/21 8:33 AM, Marvin Liu wrote:
> > In the vhost datapath, a descriptor's length is typically used in two
> > successive operations: the first step uses it for address translation,
> > the second for the memory transaction from guest to host. The interval
> > between the two steps gives a malicious guest a window in which it can
> > change the descriptor length after vhost has calculated the buffer
> > size, which may lead to a buffer overflow on the vhost side. This
> > potential risk can be eliminated by reading the descriptor length only
> > once.
> >
> > Fixes: 1be4ebb1c464 ("vhost: support indirect descriptor in mergeable Rx")
> > Fixes: 2f3225a7d69b ("vhost: add vector filling support for packed ring")
> > Fixes: 75ed51697820 ("vhost: add packed ring batch dequeue")
> 
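To make that window concrete, here is a minimal standalone sketch of the
double-read pattern and its fix. All names here (guest_desc, host_buf,
copy_desc_racy, copy_desc_safe) are hypothetical illustrations, not the
actual vhost code:

	#include <stdint.h>
	#include <string.h>
	#include <stdio.h>

	struct guest_desc {
		uint64_t addr;
		/* In real vhost this field lives in guest-writable shared
		 * memory, so every dereference is an independent read that
		 * the guest can race against; volatile models that here. */
		volatile uint32_t len;
	};

	static char host_buf[128];

	/* Vulnerable shape: desc->len is read twice. The guest may grow
	 * it between the bounds check and the copy, overflowing host_buf. */
	static int copy_desc_racy(const struct guest_desc *desc,
				  const char *guest_mem)
	{
		if (desc->len > sizeof(host_buf))	/* read #1: check */
			return -1;
		memcpy(host_buf, guest_mem + desc->addr,
		       desc->len);			/* read #2: use */
		return 0;
	}

	/* Fixed shape (the pattern of this patch): read the length once
	 * into a local and use that single snapshot for check and copy. */
	static int copy_desc_safe(const struct guest_desc *desc,
				  const char *guest_mem)
	{
		uint32_t dlen = desc->len;		/* single read */

		if (dlen > sizeof(host_buf))
			return -1;
		memcpy(host_buf, guest_mem + desc->addr, dlen);
		return 0;
	}

	int main(void)
	{
		static char guest_mem[256] = "payload";
		struct guest_desc d = { .addr = 0, .len = 8 };

		printf("racy: %d, safe: %d\n",
		       copy_desc_racy(&d, guest_mem),
		       copy_desc_safe(&d, guest_mem));
		return 0;
	}

The local copy is what guarantees a single access: with the field in
shared memory, reusing desc->len lets the compiler (and the guest)
observe two distinct reads.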
> As the offending commits were introduced in different LTS releases, I
> would prefer the patch to be split. That will make it easier to backport
> later.
> 

Maxime,
Thanks for your suggestion. I will split this patch into three parts, as
the offending commits are spread over three different LTS releases.

Regards,
Marvin

> > Signed-off-by: Marvin Liu <yong.liu@intel.com>
> > Cc: stable@dpdk.org
> >
> > diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> > index 583bf379c6..0a7d008a91 100644
> > --- a/lib/librte_vhost/virtio_net.c
> > +++ b/lib/librte_vhost/virtio_net.c
> > @@ -548,10 +548,11 @@ fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> >  			return -1;
> >  		}
> >
> > -		len += descs[idx].len;
> > +		dlen = descs[idx].len;
> > +		len += dlen;
> >
> >  		if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
> > -						descs[idx].addr, descs[idx].len,
> > +						descs[idx].addr, dlen,
> >  						perm))) {
> >  			free_ind_table(idesc);
> >  			return -1;
> > @@ -668,9 +669,10 @@ fill_vec_buf_packed_indirect(struct virtio_net *dev,
> >  			return -1;
> >  		}
> >
> > -		*len += descs[i].len;
> > +		dlen = descs[i].len;
> > +		*len += dlen;
> >  		if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
> > -						descs[i].addr, descs[i].len,
> > +						descs[i].addr, dlen,
> >  						perm)))
> >  			return -1;
> >  	}
> > @@ -691,6 +693,7 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> >  	bool wrap_counter = vq->avail_wrap_counter;
> >  	struct vring_packed_desc *descs = vq->desc_packed;
> >  	uint16_t vec_id = *vec_idx;
> > +	uint64_t dlen;
> >
> >  	if (avail_idx < vq->last_avail_idx)
> >  		wrap_counter ^= 1;
> > @@ -723,11 +726,12 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> >  							len, perm) < 0))
> >  				return -1;
> >  		} else {
> > -			*len += descs[avail_idx].len;
> > +			dlen = descs[avail_idx].len;
> > +			*len += dlen;
> >
> >  			if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
> >  							descs[avail_idx].addr,
> > -							descs[avail_idx].len,
> > +							dlen,
> >  							perm)))
> >  				return -1;
> >  		}
> > @@ -2314,7 +2318,7 @@ vhost_reserve_avail_batch_packed(struct virtio_net *dev,
> >  	}
> >
> >  	vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> > -		pkts[i]->pkt_len = descs[avail_idx + i].len - buf_offset;
> > +		pkts[i]->pkt_len = lens[i] - buf_offset;
> >  		pkts[i]->data_len = pkts[i]->pkt_len;
> >  		ids[i] = descs[avail_idx + i].id;
> >  	}
> >
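The batch hunk above works because the descriptor lengths were already
read into a local lens[] array earlier in the function (which is why
lens[i] is available here); reusing lens[i] instead of re-reading
descs[avail_idx + i].len closes the same window. A minimal standalone
sketch of that snapshot pattern, with hypothetical names (pdesc,
reserve_batch, BATCH), not the vhost code itself:

	#include <stdint.h>

	#define BATCH 4

	struct pdesc {
		uint64_t addr;
		volatile uint32_t len;	/* guest-writable shared field */
		uint16_t id;
	};

	static int reserve_batch(const struct pdesc *descs,
				 uint32_t buf_offset,
				 uint32_t pkt_len[BATCH])
	{
		uint32_t lens[BATCH];
		int i;

		/* One read per descriptor; everything after this loop
		 * works on the private snapshot only. */
		for (i = 0; i < BATCH; i++)
			lens[i] = descs[i].len;

		for (i = 0; i < BATCH; i++)
			if (lens[i] < buf_offset)	/* validate snapshot */
				return -1;

		for (i = 0; i < BATCH; i++)
			pkt_len[i] = lens[i] - buf_offset; /* reuse snapshot */

		return 0;
	}

Because validation and use both read lens[i], a guest rewriting
descs[i].len after the first loop can no longer influence the computed
packet length.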
> 
> Other than that, the patch looks valid to me.
> With the split done:
> 
> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> 
> Thanks,
> Maxime


Thread overview (6+ messages):
2021-02-26  7:33 Marvin Liu
2021-03-24  8:55 ` Maxime Coquelin
2021-03-25  3:08   ` Liu, Yong [this message]
2021-03-25  3:01 ` [dpdk-stable] [PATCH 1/3] vhost: fix split ring " Marvin Liu
2021-03-25  3:01   ` [dpdk-stable] [PATCH 2/3] vhost: fix packed " Marvin Liu
2021-03-25  3:01   ` [dpdk-stable] [PATCH 3/3] vhost: fix potential buffer overflow when batch dequeue Marvin Liu
