DPDK patches and discussions
From: Claudio Fontana <cfontana@suse.de>
To: Maxime Coquelin <maxime.coquelin@redhat.com>,
	Chenbo Xia <chenbo.xia@intel.com>
Cc: dev@dpdk.org
Subject: Re: [PATCH v3 1/2] vhost: check for nr_vec == 0 in desc_to_mbuf, mbuf_to_desc
Date: Tue, 20 Dec 2022 13:23:11 +0100	[thread overview]
Message-ID: <e85fdbb1-2ddc-8c9a-6303-5963ac37c5c9@suse.de> (raw)
In-Reply-To: <0e983180-0cbb-c20f-6937-e6384d29d099@suse.de>

On 11/2/22 11:34, Claudio Fontana wrote:
> On 10/5/22 17:06, Maxime Coquelin wrote:
>>
>>
>> On 8/2/22 02:49, Claudio Fontana wrote:
>>> In virtio_dev_split we currently must not call desc_to_mbuf with
>>> nr_vec == 0, or we end up calling rte_memcpy with a source address
>>> taken from buf_vec[0], which is an uninitialized stack variable.
>>>
>>> Improve this in general by having desc_to_mbuf and mbuf_to_desc
>>> return -1 when called with an invalid nr_vec == 0, which should
>>> fix any other instance of this problem (a sketch of the proposed
>>> guard follows the quoted patch below).
>>>
>>> This should fix errors that have been reported on multiple occasions
>>> by telcos to the DPDK, OVS and QEMU projects, as this affects in
>>> particular the Open vSwitch/DPDK plus QEMU vhost-user setup when the
>>> guest DPDK application abruptly goes away via SIGKILL and then
>>> reconnects.
>>>
>>> The backtrace looks roughly like this, depending on the specific
>>> rte_memcpy implementation selected; in any case the "src" parameter
>>> is garbage (in this example it contains 0 + dev->host_hlen, where
>>> 12 = 0xc).
>>>
>>> Thread 153 "pmd-c88/id:150" received signal SIGSEGV, Segmentation fault.
>>> [Switching to Thread 0x7f64e5e6b700 (LWP 141373)]
>>> rte_mov128blocks (n=2048, src=0xc <error: Cannot access memory at 0xc>,
>>>               dst=0x150da4480) at ../lib/eal/x86/include/rte_memcpy.h:384
>>> (gdb) bt
>>> 0  rte_mov128blocks (n=2048, src=0xc, dst=0x150da4480)
>>> 1  rte_memcpy_generic (n=2048, src=0xc, dst=0x150da4480)
>>> 2  rte_memcpy (n=2048, src=0xc, dst=<optimized out>)
>>> 3  sync_fill_seg
>>> 4  desc_to_mbuf
>>> 5  virtio_dev_tx_split
>>> 6  virtio_dev_tx_split_legacy
>>> 7  0x00007f676fea0fef in rte_vhost_dequeue_burst
>>> 8  0x00007f6772005a62 in netdev_dpdk_vhost_rxq_recv
>>> 9  0x00007f6771f38116 in netdev_rxq_recv
>>> 10 0x00007f6771f03d96 in dp_netdev_process_rxq_port
>>> 11 0x00007f6771f04239 in pmd_thread_main
>>> 12 0x00007f6771f92aff in ovsthread_wrapper
>>> 13 0x00007f6771c1b6ea in start_thread
>>> 14 0x00007f6771933a8f in clone
>>>
>>> Tested-by: Claudio Fontana <cfontana@suse.de>
>>> Signed-off-by: Claudio Fontana <cfontana@suse.de>
>>> ---
>>>   lib/vhost/virtio_net.c | 11 ++++++++---
>>>   1 file changed, 8 insertions(+), 3 deletions(-)
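
[A minimal sketch of the guard the quoted patch describes, with a
deliberately simplified signature: the real desc_to_mbuf and
mbuf_to_desc in lib/vhost/virtio_net.c take the device, the virtqueue
and the mbuf as parameters and copy segment by segment via
sync_fill_seg; only the struct buf_vector layout below matches
rte_vhost.h.]

  #include <stdint.h>
  #include <string.h>

  /* Layout as in rte_vhost.h. */
  struct buf_vector {
          uint64_t buf_iova;
          uint64_t buf_addr;
          uint32_t buf_len;
          uint16_t desc_idx;
  };

  /* Sketch only, not the actual DPDK function. */
  static int
  desc_to_mbuf_sketch(void *mbuf_data, const struct buf_vector *buf_vec,
                      uint16_t nr_vec)
  {
          /* With nr_vec == 0 the caller never filled in buf_vec[0],
           * so buf_vec[0].buf_addr is stack garbage; copying from it
           * is exactly the SIGSEGV shown in the backtrace above. */
          if (nr_vec == 0)
                  return -1;

          memcpy(mbuf_data,
                 (const void *)(uintptr_t)buf_vec[0].buf_addr,
                 buf_vec[0].buf_len);
          return 0;
  }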
>>
>> This patch is also no longer necessary since CVE-2022-2132 has been
>> fixed. The latest LTS versions and the upstream main branch contain
>> the fixes:
>>
>> dc1516e260a0 ("vhost: fix header spanned across more than two descriptors")
>> 71bd0cc536ad ("vhost: discard too small descriptor chains")
>>
>> desc_to_mbuf callers now check that the descriptor chain is at least
>> the size of the virtio_net header, so nr_vec cannot be 0 in
>> desc_to_mbuf.
>>
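
[A hedged illustration of that caller-side check. The actual logic from
commits dc1516e260a0 and 71bd0cc536ad lives in the split-ring dequeue
path and is more involved; the helper name chain_holds_header is
hypothetical, and struct buf_vector is reused from the sketch above.]

  #include <stdint.h>

  /* Sketch of the caller-side check added by the CVE-2022-2132 fixes:
   * descriptor chains too short to hold the virtio_net header are
   * discarded before desc_to_mbuf runs, so nr_vec == 0 (total chain
   * length 0) can never reach it. */
  static int
  chain_holds_header(const struct buf_vector *buf_vec, uint16_t nr_vec,
                     uint32_t vhost_hlen)
  {
          uint32_t chain_len = 0;
          uint16_t i;

          for (i = 0; i < nr_vec; i++)
                  chain_len += buf_vec[i].buf_len;

          /* vhost_hlen is 10 bytes for the legacy header, 12 with
           * mergeable rx buffers; a chain that cannot even hold the
           * header (including the nr_vec == 0 case) is dropped. */
          return chain_len > vhost_hlen;
  }

[With this check in place, the guard from the earlier sketch becomes
redundant, which is why the patch was dropped.]
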
>> Since I cannot reproduce the issue myself, if you are willing to try,
>> please let us know the results.
>>
>> Maxime
>>
> 
> Hello Maxime,
> 
> which versions of DPDK did you test in the guest? The problem seems to
> be easier to reproduce with DPDK 16.x in the guest.
> The guest OS where the problem was encountered in the field is
> "Ubuntu 16.04", but we were also able to reproduce it in our lab with
> Ubuntu 20.04.
> For reproduction we used a few network cards, mainly the Intel X520.
> 
> I'll let you know our results as I have them.
> 
> Thanks,
> 
> Claudio

Just to follow up on this: the problem is fully addressed by the fixes
for CVE-2022-2132.

Thanks,

Claudio


Thread overview: 16+ messages
2022-08-02  0:49 [PATCH v3 0/2] vhost fixes for OVS SIGSEGV in PMD Claudio Fontana
2022-08-02  0:49 ` [PATCH v3 1/2] vhost: check for nr_vec == 0 in desc_to_mbuf, mbuf_to_desc Claudio Fontana
2022-08-02  1:34   ` Stephen Hemminger
2022-09-28 14:37   ` Maxime Coquelin
2022-09-28 15:21     ` Claudio Fontana
2022-09-28 16:03       ` Thomas Monjalon
2022-09-30 10:22         ` Maxime Coquelin
2022-10-05 15:06   ` Maxime Coquelin
2022-11-02 10:34     ` Claudio Fontana
2022-12-20 12:23       ` Claudio Fontana [this message]
2022-08-02  0:49 ` [PATCH v3 2/2] vhost: improve error handling in desc_to_mbuf Claudio Fontana
2022-10-05 12:57   ` Maxime Coquelin
2022-08-02  1:40 ` [PATCH v3 0/2] vhost fixes for OVS SIGSEGV in PMD Stephen Hemminger
2022-08-02 17:20   ` Claudio Fontana
2022-08-04 10:32     ` Claudio Fontana
2022-08-09 12:39 ` Claudio Fontana
