DPDK patches and discussions
From: Claudio Fontana <cfontana@suse.de>
To: Maxime Coquelin <maxime.coquelin@redhat.com>,
	Chenbo Xia <chenbo.xia@intel.com>
Cc: dev@dpdk.org
Subject: Re: [PATCH v3 1/2] vhost: check for nr_vec == 0 in desc_to_mbuf, mbuf_to_desc
Date: Wed, 28 Sep 2022 17:21:33 +0200	[thread overview]
Message-ID: <77f63478-88d1-01b2-0cdd-c93289bd42a6@suse.de> (raw)
In-Reply-To: <228bf109-57c2-91f0-ab2c-6ac92d688781@redhat.com>

On 9/28/22 16:37, Maxime Coquelin wrote:
> Hi Claudio,
> 
> The title should be reworded, maybe something like below?
> "vhost: fix possible out of bound access in buffer vectors"

Possibly; I'll leave it to you and the other maintainers to decide.

> 
> On 8/2/22 02:49, Claudio Fontana wrote:
>> In virtio_dev_tx_split we cannot currently call desc_to_mbuf with
>> nr_vec == 0, or we end up trying to rte_memcpy from a source address
>> buf_vec[0] that is an uninitialized stack variable.
>>
>> Improve this in general by having desc_to_mbuf and mbuf_to_desc
>> return -1 when called with an invalid nr_vec == 0, which should
>> fix any other instance of this problem.
>>
>> This should fix errors that have been reported on multiple occasions
>> by telcos to the DPDK, OVS and QEMU projects, as this affects in
>> particular the Open vSwitch/DPDK plus QEMU vhost-user setup, when the
>> guest DPDK application abruptly goes away via SIGKILL and then
>> reconnects.
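
To spell out the failure mode, here is a minimal sketch of my own (a
simplified stand-in for desc_to_mbuf, not the actual DPDK code): with
nr_vec == 0 the buf_vec array is never filled in, so the copy source is
whatever happens to be on the stack. In the crash below that garbage
happens to be 0, and 0 plus the 12-byte virtio-net header length gives
the 0xc seen in the backtrace.

    /* Minimal sketch of the failure mode; simplified, not actual DPDK code. */
    #include <stdint.h>
    #include <string.h>

    struct buf_vector {               /* same layout as DPDK's buf_vector */
        uint64_t buf_iova;
        uint64_t buf_addr;
        uint32_t buf_len;
        uint32_t desc_idx;
    };

    static void copy_desc(struct buf_vector *buf_vec, uint16_t nr_vec,
                          uint8_t *dst, uint32_t hdr_len)
    {
        /* This is the guard the patch adds; without it, nr_vec == 0
         * means the caller never filled buf_vec in, and buf_vec[0]
         * below reads an uninitialized stack variable. */
        if (nr_vec == 0)
            return;

        uint64_t buf_addr = buf_vec[0].buf_addr; /* unguarded: garbage, e.g. 0 */
        uint32_t buf_len  = buf_vec[0].buf_len;  /* garbage too, e.g. 2048     */

        /* src = 0 + 12 = 0xc, the faulting address in the backtrace below */
        memcpy(dst, (const void *)(uintptr_t)(buf_addr + hdr_len),
               buf_len - hdr_len);
    }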
> 
> Looking at fill_vec_buf_split() and map_one_desc(), I suspect that on
> guest DPDK application kill/crash, the shared virtqueue memory gets
> unmapped on the guest side, so the backend reads zeros in the
> descriptors, and hence zero-length buffers.
> 
> If that's the case, then your fix might not be enough: if for example
> the backend reads the .next field of the desc as 0, it may cause
> undefined behaviour. I'm not sure how we could address this, though.
> 
> Maybe we miss something on the Vhost side to detect that the guest
> application has crashed, because we should not try to access the
> vrings until the new guest app instance is done with its
> initialization.
> 
> I'm going to try to reproduce the issue, but I'm not sure I will
> succeed. Could you please share the Vhost logs when the issue is 
> reproduced and you face the crash?
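
Regarding the zeroed-descriptor theory: for illustration only, here is a
rough sketch (my own, not a proposed patch) of the kind of extra
validation that might catch a descriptor the backend reads back as all
zeros; the struct follows the virtio split-ring layout:

    #include <stdbool.h>
    #include <stdint.h>

    #define VRING_DESC_F_NEXT 1

    /* Split-ring descriptor as defined by the virtio spec
     * (endianness handling omitted for brevity). */
    struct vring_desc {
        uint64_t addr;   /* guest-physical address of the buffer */
        uint32_t len;    /* buffer length */
        uint16_t flags;  /* VRING_DESC_F_NEXT, ... */
        uint16_t next;   /* index of the chained descriptor */
    };

    /* Hypothetical check: treat a zeroed-out descriptor, or a chain
     * pointer outside the ring, as invalid instead of following it. */
    static bool desc_looks_valid(const struct vring_desc *desc,
                                 uint16_t ring_size)
    {
        if (desc->addr == 0 || desc->len == 0)
            return false;
        if ((desc->flags & VRING_DESC_F_NEXT) && desc->next >= ring_size)
            return false;
        return true;
    }

Whether a zero-length descriptor can ever be legitimate is a separate
question, so treat the above only as a starting point for discussion.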

With vacations and lab work it's unlikely anything can be done from my side for the next 2-3 weeks.

This issue has been reported multiple times by multiple telco customers over about a year; it is all over the mailing lists
of OVS, DPDK and QEMU, with all the details.

Something in the governance of these open source projects is clearly not working right, probably too much inward focus among a small number of companies, but I digress.

I think Chenbo Xia already knows the context, and I suspect this is now considered a "security issue". The problem is that the information about all of this has already been public for a year.

I will repost the reproduction steps here:

In general, the easiest way to reproduce this is to use an older DPDK in the guest (16.11.x, as shipped in the Ubuntu package)
and run a testpmd application with the simplest possible configuration:

I could reduce the problem to just a single data interface, with no need to pin anything or do any complex flow configuration. The key is getting the guest application, QEMU and DPDK into the right state. The data plane traffic flows from a traffic generator on another host.

1.   Interfaces and Bridges

    # ovs-vsctl show

9edee1e2-c16b-41cf-bf9d-ce1ffbc096d2
    Bridge ovs-br0
        fail_mode: standalone
        datapath_type: netdev
        Port eth0
            Interface eth0
        Port dpdkvhostuser0
            Interface dpdkvhostuser0
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/dpdkvhostuser0"}
        Port ovs-br0
            Interface ovs-br0
                type: internal
        Port data0
            Interface data0
                type: dpdk
                options: {dpdk-devargs="0000:17:00.1", dpdk-lsc-interrupt="true", n_rxq="2", n_txq_desc="2048"}
    ovs_version: "2.14.2"


2.  QEMU VM XML configuration snippet:

    <interface type='vhostuser'>
      <mac address='00:00:00:00:00:01'/>
      <source type='unix' path='/tmp/dpdkvhostuser0' mode='server'/>
      <target dev='dpdkvhostuser0'/>
      <model type='virtio'/>
      <driver name='vhost' queues='4' rx_queue_size='1024' tx_queue_size='1024'>
        <host mrg_rxbuf='off'/>
      </driver>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='xx:xx:xx:xx:xx:xx'/>
      <source network='default' portid='...' bridge='virbr0'/>
      <target dev='vnet9'/>
      <model type='virtio'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>


3.  Inside the VM, bind the device:

    $ cat devbind.sh

    modprobe uio_pci_generic
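    # bind the guest virtio NIC (0000:07:00.0, matching the VM XML above) to the generic UIO driver for DPDK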
    dpdk-stable-16.11.11/tools/dpdk-devbind.py -b uio_pci_generic 0000:07:00.0


4.  Prepare the test scripts inside the VM:

    Here are the scripts to use:

    $ cat start_testpmd.sh
    
    #! /bin/bash

    while true ; do 
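      # restart testpmd as soon as it exits (it gets killed by kill_testpmd.sh below)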
      testpmd --log-level=8 -c 0x1e -n 4 --socket-mem 512 -- -i --nb-cores=3 --port-topology=chained --disable-hw-vlan --forward-mode=macswap --auto-start --rxq=4 --txq=4 --rxd=512 --txd=512 --burst=32
    done

    $ cat kill_testpmd.sh

    #! /bin/bash

    while true ; do
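       # SIGKILL any running testpmd, simulating an abrupt crash of the guest application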
       kill -9 `pgrep -x testpmd`
       sleep 2
    done

     
5. Run the test scripts in separate ssh sessions or consoles:

   session 1: ./start_testpmd.sh
   session 2: ./kill_testpmd.sh

   This ensures that testpmd is continuously restarted and then killed after running for a short time.
   If the "sleep 2" is too short (raising it may be necessary if you are using an older qemu or DPDK on the host), you can increase it to "sleep 10".

   But with upstream DPDK and decent hardware, "sleep 2" will make you hit the issue quicker.

The issue will appear as a segfault of the ovs daemon. It can be captured via gdb, or monitored via the ovs monitor,
with a backtrace like the one quoted below from the patch description:


> 
>>
>> The backtrace looks roughly like this, depending on the specific
>> rte_memcpy selected, etc.; in any case the "src" parameter is garbage
>> (in this example containing 0 + dev->vhost_hlen (12 = 0xc)).
>>
>> Thread 153 "pmd-c88/id:150" received signal SIGSEGV, Segmentation fault.
>> [Switching to Thread 0x7f64e5e6b700 (LWP 141373)]
>> rte_mov128blocks (n=2048, src=0xc <error: Cannot access memory at 0xc>,
>>               dst=0x150da4480) at ../lib/eal/x86/include/rte_memcpy.h:384
>> (gdb) bt
>> 0  rte_mov128blocks (n=2048, src=0xc, dst=0x150da4480)
>> 1  rte_memcpy_generic (n=2048, src=0xc, dst=0x150da4480)
>> 2  rte_memcpy (n=2048, src=0xc, dst=<optimized out>)
>> 3  sync_fill_seg
>> 4  desc_to_mbuf
>> 5  virtio_dev_tx_split
>> 6  virtio_dev_tx_split_legacy
>> 7  0x00007f676fea0fef in rte_vhost_dequeue_burst
>> 8  0x00007f6772005a62 in netdev_dpdk_vhost_rxq_recv
>> 9  0x00007f6771f38116 in netdev_rxq_recv
>> 10 0x00007f6771f03d96 in dp_netdev_process_rxq_port
>> 11 0x00007f6771f04239 in pmd_thread_main
>> 12 0x00007f6771f92aff in ovsthread_wrapper
>> 13 0x00007f6771c1b6ea in start_thread
>> 14 0x00007f6771933a8f in clone
> 
> 
> It is a fix, so it should contain the Fixes tag, and also cc
> stable@dpdk.org.

After one year, it is time for Red Hat and Intel, or whoever makes up the governance of this project,
to say whether there is any intention to fix this or not, before I or anyone else at SUSE invests any more of our time and effort here.

Any tags you need you can add as required.

Thanks,

Claudio


> 
>>
>> Tested-by: Claudio Fontana <cfontana@suse.de>
> 
> Tested-by can be removed as you're already the author.
> 
>> Signed-off-by: Claudio Fontana <cfontana@suse.de>
>> ---
>>   lib/vhost/virtio_net.c | 11 ++++++++---
>>   1 file changed, 8 insertions(+), 3 deletions(-)
>>
>> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
>> index 35fa4670fd..eb19e54c2b 100644
>> --- a/lib/vhost/virtio_net.c
>> +++ b/lib/vhost/virtio_net.c
>> @@ -1153,7 +1153,7 @@ mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>   	struct virtio_net_hdr_mrg_rxbuf tmp_hdr, *hdr = NULL;
>>   	struct vhost_async *async = vq->async;
>>   
>> -	if (unlikely(m == NULL))
>> +	if (unlikely(m == NULL) || nr_vec == 0)
> 
> I think nr_vec == 0 is also unlikely.
> 
>>   		return -1;
>>   
>>   	buf_addr = buf_vec[vec_idx].buf_addr;
>> @@ -2673,6 +2673,9 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>   	struct vhost_async *async = vq->async;
>>   	struct async_inflight_info *pkts_info;
>>   
>> +	if (unlikely(nr_vec == 0))
>> +		return -1;
>> +
>>   	buf_addr = buf_vec[vec_idx].buf_addr;
>>   	buf_iova = buf_vec[vec_idx].buf_iova;
>>   	buf_len = buf_vec[vec_idx].buf_len;
>> @@ -2917,9 +2920,11 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>   						vq->last_avail_idx + i,
>>   						&nr_vec, buf_vec,
>>   						&head_idx, &buf_len,
>> -						VHOST_ACCESS_RO) < 0))
>> +						VHOST_ACCESS_RO) < 0)) {
>> +			dropped += 1;
>> +			i++;
>>   			break;
>> -
>> +		}
>>   		update_shadow_used_ring_split(vq, head_idx, 0);
>>   
>>   		err = virtio_dev_pktmbuf_prep(dev, pkts[i], buf_len);
> 


Thread overview: 16+ messages
2022-08-02  0:49 [PATCH v3 0/2] vhost fixes for OVS SIGSEGV in PMD Claudio Fontana
2022-08-02  0:49 ` [PATCH v3 1/2] vhost: check for nr_vec == 0 in desc_to_mbuf, mbuf_to_desc Claudio Fontana
2022-08-02  1:34   ` Stephen Hemminger
2022-09-28 14:37   ` Maxime Coquelin
2022-09-28 15:21     ` Claudio Fontana [this message]
2022-09-28 16:03       ` Thomas Monjalon
2022-09-30 10:22         ` Maxime Coquelin
2022-10-05 15:06   ` Maxime Coquelin
2022-11-02 10:34     ` Claudio Fontana
2022-12-20 12:23       ` Claudio Fontana
2022-08-02  0:49 ` [PATCH v3 2/2] vhost: improve error handling in desc_to_mbuf Claudio Fontana
2022-10-05 12:57   ` Maxime Coquelin
2022-08-02  1:40 ` [PATCH v3 0/2] vhost fixes for OVS SIGSEGV in PMD Stephen Hemminger
2022-08-02 17:20   ` Claudio Fontana
2022-08-04 10:32     ` Claudio Fontana
2022-08-09 12:39 ` Claudio Fontana
