DPDK patches and discussions
From: Jianfeng Tan <jianfeng.tan@intel.com>
To: dev@dpdk.org
Cc: huawei.xie@intel.com, yuanhan.liu@linux.intel.com,
	Jianfeng Tan <jianfeng.tan@intel.com>
Subject: [dpdk-dev] [PATCH v2] virtio: fix segfault when transmit pkts
Date: Mon, 25 Apr 2016 02:37:45 +0000	[thread overview]
Message-ID: <1461551865-15930-1-git-send-email-jianfeng.tan@intel.com> (raw)
In-Reply-To: <1461242170-146337-1-git-send-email-jianfeng.tan@intel.com>

Issue: transmitting packets through a virtio NIC can trigger a segmentation fault.

How to reproduce:
Set up a case where the VM sends packets to a vhost-user port without using
indirect descriptors (the issue does not occur when packets are transmitted
via indirect descs), and make sure all descriptors are exhausted before vhost
dequeues any packets.

a. start testpmd with vhost.
  $ testpmd -c 0x3 -n 4 --socket-mem 1024,0 --no-pci \
    --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i --nb-cores=1

b. start a QEMU guest with a virtio NIC connected to the vhost-user port;
make sure mrg_rxbuf is enabled.
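  An illustrative QEMU invocation (the image path, memory size and CPU
  options are placeholders; the socket path must match step a, and the guest
  memory must come from a shared hugepage backend for vhost-user to work):
  $ qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 1024 \
      -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
      -numa node,memdev=mem -mem-prealloc \
      -chardev socket,id=char0,path=/tmp/sock0 \
      -netdev type=vhost-user,id=net0,chardev=char0 \
      -device virtio-net-pci,netdev=net0,mrg_rxbuf=on \
      -drive file=/path/to/guest.img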

c. enable forwarding in the host testpmd.
  testpmd> set fwd io
  testpmd> start
  (it is even better not to issue 'start' on the vhost-user side at all, so
  that descriptors are never dequeued)

d. start testpmd in VM.
  $ testpmd -c 0x3 -n 4 -m 1024 -- -i --disable-hw-vlan-filter --txqflags=0xf01
  testpmd> set fwd txonly
  testpmd> start

How to fix: inside virtqueue_enqueue_xmit(), the descriptor flags are already
set correctly within the do {} while () loop (the last segment never gets
VRING_DESC_F_NEXT), so there is no need to update them again after the loop.
Worse, doing so is unsafe: if all descriptors have been consumed, idx equals
VQ_RING_DESC_CHAIN_END (32768) when the loop exits, and using that idx to
reference the start_dp array leads to a segmentation fault. A minimal model
of this is sketched below.
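The following sketch is a simplified, self-contained model of the chain walk
(the descriptor and mbuf structures are reduced stand-ins, not the real
driver types); it shows that when the mbuf chain uses up every descriptor,
idx ends at VQ_RING_DESC_CHAIN_END after the loop, so the removed post-loop
flag update would index far outside start_dp[]:

#include <stdint.h>
#include <stdio.h>

#define VRING_DESC_F_NEXT       1
#define VQ_RING_DESC_CHAIN_END  32768
#define RING_SIZE               4      /* tiny ring, for illustration only */

struct vring_desc {                    /* simplified descriptor */
	uint32_t len;
	uint16_t flags;
	uint16_t next;
};

struct seg {                           /* stand-in for a chained rte_mbuf */
	uint32_t len;
	struct seg *next;
};

int main(void)
{
	struct vring_desc start_dp[RING_SIZE];
	struct seg segs[RING_SIZE];
	struct seg *cookie = &segs[0];
	uint16_t idx = 0;
	int i;

	/* free-descriptor chain: 0 -> 1 -> 2 -> 3 -> VQ_RING_DESC_CHAIN_END */
	for (i = 0; i < RING_SIZE; i++)
		start_dp[i].next = (i + 1 < RING_SIZE) ?
			i + 1 : VQ_RING_DESC_CHAIN_END;

	/* an mbuf chain long enough to consume every descriptor */
	for (i = 0; i < RING_SIZE; i++) {
		segs[i].len = 64;
		segs[i].next = (i + 1 < RING_SIZE) ? &segs[i + 1] : NULL;
	}

	do {
		start_dp[idx].len = cookie->len;
		/* the last segment never gets VRING_DESC_F_NEXT, so the
		 * flags are already final when the loop ends */
		start_dp[idx].flags = cookie->next ? VRING_DESC_F_NEXT : 0;
		idx = start_dp[idx].next;
	} while ((cookie = cookie->next) != NULL);

	/* idx is now VQ_RING_DESC_CHAIN_END (32768); the removed line
	 *   start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
	 * would access memory far outside start_dp[] */
	printf("idx after loop = %u (ring size = %d)\n", idx, RING_SIZE);
	return 0;
}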

Fixes: dd856dfcb9e ("virtio: use any layout on Tx")

Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
---
 v2: refine the commit message.

 drivers/net/virtio/virtio_rxtx.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index ef21d8e..432aeab 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -271,8 +271,6 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
 		idx = start_dp[idx].next;
 	} while ((cookie = cookie->next) != NULL);
 
-	start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
-
 	if (use_indirect)
 		idx = txvq->vq_ring.desc[head_idx].next;
 
-- 
2.1.4

Thread overview: 12+ messages
2016-04-21 12:36 [dpdk-dev] [PATCH] " Jianfeng Tan
2016-04-21 22:44 ` Yuanhan Liu
2016-04-22 14:23   ` Xie, Huawei
2016-04-25  1:58     ` Tan, Jianfeng
2016-04-25  2:37 ` Jianfeng Tan [this message]
2016-04-25  7:33   ` [dpdk-dev] [PATCH v2] " Xie, Huawei
2016-04-26  3:43   ` Yuanhan Liu
2016-04-26  3:47     ` Tan, Jianfeng
2016-04-26  8:43     ` Thomas Monjalon
2016-04-26 16:54       ` Yuanhan Liu
2016-04-26  4:48 ` [dpdk-dev] [PATCH] " Stephen Hemminger
2016-04-26  5:08   ` Tan, Jianfeng
