DPDK patches and discussions
From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
To: "Xie, Huawei" <huawei.xie@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] vhost: avoid buffer overflow in update_secure_len
Date: Wed, 18 Nov 2015 14:25:50 +0800	[thread overview]
Message-ID: <20151118062550.GA2326@yliu-dev.sh.intel.com> (raw)
In-Reply-To: <C37D651A908B024F974696C65296B57B4B196799@SHSMSX101.ccr.corp.intel.com>

On Wed, Nov 18, 2015 at 06:13:08AM +0000, Xie, Huawei wrote:
> On 11/18/2015 10:56 AM, Yuanhan Liu wrote:
> > On Tue, Nov 17, 2015 at 08:39:30AM -0800, Rich Lane wrote:
> >> I don't think that adding a SIGINT handler is the right solution, though. The
> >> guest app could be killed with another signal (SIGKILL).
> > Good point.
> >
> >> Worse, a malicious or
> >> buggy guest could write to just that field. vhost should not crash no matter
> >> what the guest writes into the virtqueues.
> Rich, exactly, that has been on our list for a long time. We should
> ensure that "no malicious guest can crash the host through vrings",
> otherwise this vhost implementation couldn't be deployed into a
> production environment.
> There are many other known security holes in the current dpdk vhost in
> my mind. A very simple example: we don't check the gpa_to_vva return
> value, so you could easily put an invalid GPA in a vring entry to
> crash vhost.
> My plan is to review the vhost implementation, fix all the possible
> issues in one single patch set, and make the fix performance

First of all, there is no way you could find all of them at
once, for we simply make mistakes, and may miss something here
and there.

And, fixing them in one single patch is not a good practice; fixing
one issue per patch is. That makes each patch easier to review, yet
easier to revert if it turns out to be a wrong fix. And it's friendly
to bisect as well, if it breaks something.

	--yliu

> optimization friendly rather than fix them here and there.
> 
> > Yeah, I agree with you: though we could fix this issue on the source
> > side, we should also add some defense here.
> >
> > How about following patch then?
> >
> > Note that the vec_id overflow check should be done before referencing
> > it, but not after. Hence I moved it ahead.
> >
> > 	--yliu
> >
> > ---
> > diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c
> > index 9322ce6..08f5942 100644
> > --- a/lib/librte_vhost/vhost_rxtx.c
> > +++ b/lib/librte_vhost/vhost_rxtx.c
> > @@ -132,6 +132,8 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
> >  
> >  		/* Get descriptor from available ring */
> >  		desc = &vq->desc[head[packet_success]];
> > +		if (desc->len == 0)
> > +			break;
> >  
> >  		buff = pkts[packet_success];
> >  
> > @@ -153,6 +155,8 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
> >  			/* Buffer address translation. */
> >  			buff_addr = gpa_to_vva(dev, desc->addr);
> >  		} else {
> > +			if (desc->len < vq->vhost_hlen)
> > +				break;
> >  			vb_offset += vq->vhost_hlen;
> >  			hdr = 1;
> >  		}
> > @@ -446,6 +450,9 @@ update_secure_len(struct vhost_virtqueue *vq, uint32_t id,
> >  	uint32_t vec_id = *vec_idx;
> >  
> >  	do {
> > +		if (vec_id >= BUF_VECTOR_MAX)
> > +			break;
> > +
> >  		next_desc = 0;
> >  		len += vq->desc[idx].len;
> >  		vq->buf_vec[vec_id].buf_addr = vq->desc[idx].addr;
> > @@ -519,6 +526,8 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
> >  					goto merge_rx_exit;
> >  				} else {
> >  					update_secure_len(vq, res_cur_idx, &secure_len, &vec_idx);
> > +					if (secure_len == 0)
> > +						goto merge_rx_exit;
> >  					res_cur_idx++;
> >  				}
> >  			} while (pkt_len > secure_len);
> > @@ -631,6 +640,8 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
> >  		uint8_t alloc_err = 0;
> >  
> >  		desc = &vq->desc[head[entry_success]];
> > +		if (desc->len == 0)
> > +			break;
> >  
> >  		/* Discard first buffer as it is the virtio header */
> >  		if (desc->flags & VRING_DESC_F_NEXT) {
> > @@ -638,6 +649,8 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
> >  			vb_offset = 0;
> >  			vb_avail = desc->len;
> >  		} else {
> > +			if (desc->len < vq->vhost_hlen)
> > +				break;
> >  			vb_offset = vq->vhost_hlen;
> >  			vb_avail = desc->len - vb_offset;
> >  		}
> >
> 


Thread overview: 18+ messages
2015-11-12  8:02 Rich Lane
2015-11-12  9:23 ` Yuanhan Liu
2015-11-12 21:46   ` Rich Lane
2015-11-17 13:23     ` Yuanhan Liu
2015-11-17 16:39       ` Rich Lane
2015-11-18  2:56         ` Yuanhan Liu
2015-11-18  5:23           ` Wang, Zhihong
2015-11-18  5:26           ` Rich Lane
2015-11-18  5:32             ` Yuanhan Liu
2015-11-18  6:13           ` Xie, Huawei
2015-11-18  6:25             ` Yuanhan Liu [this message]
2015-11-18  8:13               ` Xie, Huawei
2015-11-18 15:53             ` Stephen Hemminger
2015-11-18 16:00               ` Xie, Huawei
2015-11-18  7:53           ` Xie, Huawei
2015-11-18  8:48             ` Yuanhan Liu
2015-11-18 11:15               ` Xie, Huawei
2015-11-19  5:51                 ` Yuanhan Liu
