From: Claudio Fontana <cfontana@suse.de>
To: Maxime Coquelin <maxime.coquelin@redhat.com>,
	Chenbo Xia <chenbo.xia@intel.com>
Cc: dev@dpdk.org, Claudio Fontana <cfontana@suse.de>
Subject: [PATCH v3 1/2] vhost: check for nr_vec == 0 in desc_to_mbuf, mbuf_to_desc
Date: Tue,  2 Aug 2022 02:49:37 +0200
Message-ID: <20220802004938.23670-2-cfontana@suse.de>
In-Reply-To: <20220802004938.23670-1-cfontana@suse.de>

In virtio_dev_tx_split() we can currently call desc_to_mbuf() with
nr_vec == 0, in which case we end up calling rte_memcpy() with a
source address taken from buf_vec[0], an uninitialized stack variable.
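
For illustration, here is a minimal standalone sketch of the failure
mode (hypothetical names, plain memcpy() standing in for rte_memcpy();
not the actual vhost code): buf_vec is a stack array that the fill
step only populates for the descriptors it actually maps, so with
nr_vec == 0 every field of buf_vec[0] is indeterminate.

  /* sketch.c - hypothetical, simplified illustration only */
  #include <stdint.h>
  #include <string.h>

  struct buf_vector {
          uint64_t buf_addr;
          uint32_t buf_len;
  };

  /* Stand-in for desc_to_mbuf(): blindly trusts buf_vec[0]. */
  static void copy_first_seg(const struct buf_vector *buf_vec,
                             uint8_t *dst)
  {
          /* If the caller never wrote buf_vec[0], buf_addr is stack
           * garbage (often 0); adding the 12-byte virtio-net header
           * length then yields the bogus src of 0xc seen in the
           * backtrace below. */
          memcpy(dst, (const void *)(uintptr_t)buf_vec[0].buf_addr,
                 buf_vec[0].buf_len);
  }

  int main(void)
  {
          struct buf_vector buf_vec[4];   /* uninitialized, like the
                                           * local in the caller */
          uint8_t dst[2048];
          copy_first_seg(buf_vec, dst);   /* undefined behavior */
          return 0;
  }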

Improve this in general by having desc_to_mbuf() and mbuf_to_desc()
return -1 when called with an invalid nr_vec == 0; this should also
fix any other instance of this problem.
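
Applying the same guard to the sketch above (again hypothetical and
simplified), the callee validates nr_vec before touching buf_vec, and
the caller treats the -1 like any other per-packet failure:

  /* Guarded variant of copy_first_seg() from the sketch above. */
  static int copy_first_seg_safe(const struct buf_vector *buf_vec,
                                 uint16_t nr_vec, uint8_t *dst)
  {
          if (nr_vec == 0)        /* nothing mapped: refuse to read */
                  return -1;
          memcpy(dst, (const void *)(uintptr_t)buf_vec[0].buf_addr,
                 buf_vec[0].buf_len);
          return 0;
  }

  /*
   * Caller side, mirroring the error handling added in the diff
   * below:
   *
   *     if (copy_first_seg_safe(buf_vec, nr_vec, dst) < 0) {
   *             dropped += 1;
   *             break;
   *     }
   */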

This should fix errors that have been reported on multiple occasions
by telcos to the DPDK, OVS and QEMU projects; it affects in
particular the Open vSwitch/DPDK plus QEMU vhost-user setup, when the
guest DPDK application abruptly goes away via SIGKILL and then
reconnects.

The backtrace looks roughly like this (the exact frames depend on the
specific rte_memcpy implementation selected, etc.); in any case the
"src" parameter is garbage, in this example containing
0 + dev->vhost_hlen (12 = 0xc).

Thread 153 "pmd-c88/id:150" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f64e5e6b700 (LWP 141373)]
rte_mov128blocks (n=2048, src=0xc <error: Cannot access memory at 0xc>,
             dst=0x150da4480) at ../lib/eal/x86/include/rte_memcpy.h:384
(gdb) bt
0  rte_mov128blocks (n=2048, src=0xc, dst=0x150da4480)
1  rte_memcpy_generic (n=2048, src=0xc, dst=0x150da4480)
2  rte_memcpy (n=2048, src=0xc, dst=<optimized out>)
3  sync_fill_seg
4  desc_to_mbuf
5  virtio_dev_tx_split
6  virtio_dev_tx_split_legacy
7  0x00007f676fea0fef in rte_vhost_dequeue_burst
8  0x00007f6772005a62 in netdev_dpdk_vhost_rxq_recv
9  0x00007f6771f38116 in netdev_rxq_recv
10 0x00007f6771f03d96 in dp_netdev_process_rxq_port
11 0x00007f6771f04239 in pmd_thread_main
12 0x00007f6771f92aff in ovsthread_wrapper
13 0x00007f6771c1b6ea in start_thread
14 0x00007f6771933a8f in clone

Tested-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Claudio Fontana <cfontana@suse.de>
---
 lib/vhost/virtio_net.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 35fa4670fd..eb19e54c2b 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -1153,7 +1153,7 @@ mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	struct virtio_net_hdr_mrg_rxbuf tmp_hdr, *hdr = NULL;
 	struct vhost_async *async = vq->async;
 
-	if (unlikely(m == NULL))
+	if (unlikely(m == NULL) || nr_vec == 0)
 		return -1;
 
 	buf_addr = buf_vec[vec_idx].buf_addr;
@@ -2673,6 +2673,9 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	struct vhost_async *async = vq->async;
 	struct async_inflight_info *pkts_info;
 
+	if (unlikely(nr_vec == 0))
+		return -1;
+
 	buf_addr = buf_vec[vec_idx].buf_addr;
 	buf_iova = buf_vec[vec_idx].buf_iova;
 	buf_len = buf_vec[vec_idx].buf_len;
@@ -2917,9 +2920,11 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 						vq->last_avail_idx + i,
 						&nr_vec, buf_vec,
 						&head_idx, &buf_len,
-						VHOST_ACCESS_RO) < 0))
+						VHOST_ACCESS_RO) < 0)) {
+			dropped += 1;
+			i++;
 			break;
-
+		}
 		update_shadow_used_ring_split(vq, head_idx, 0);
 
 		err = virtio_dev_pktmbuf_prep(dev, pkts[i], buf_len);
-- 
2.26.2

