From: Kevin Traynor <ktraynor@redhat.com>
To: Eugenio Pérez <eperezma@redhat.com>, dev@dpdk.org
Cc: Adrian Moreno Zapata, Maxime Coquelin, stable@dpdk.org, Zhihong Wang,
 Chenbo Xia
Date: Tue, 25 Aug 2020 10:17:52 +0100
In-Reply-To: <20200810141103.8015-2-eperezma@redhat.com>
References: <20200810141103.8015-1-eperezma@redhat.com>
 <20200810141103.8015-2-eperezma@redhat.com>
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH 1/1] vhost: fix iotlb mempool
 single-consumer flag

On 10/08/2020 15:11, Eugenio Pérez wrote:
> Bugzilla bug: 523
> 
> Using testpmd as a vhost-user with iommu:
> 
> /home/dpdk/build/app/dpdk-testpmd -l 1,3 \
>   --vdev net_vhost0,iface=/tmp/vhost-user1,queues=1,iommu-support=1 \
>   -- --auto-start --stats-period 5 --forward-mode=txonly
> 
> And qemu with packed virtqueue:
> 
> ...
> 
> --
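As an aside for anyone wanting to reproduce the sanitizer reports below:
meson's stock sanitizer options should be enough to get a TSan build,
e.g. something like

  meson setup build -Db_sanitize=thread -Db_lundef=false
  ninja -C build

(that exact invocation is my assumption, I haven't re-run it on this
tree, and note the point in the commit message that the DPDK rwlocks
had to be swapped for POSIX ones so that TSan can model the locking).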
> It is possible to consume the iotlb entries of the mempool from
> different threads. Thread sanitizer example output (after changing the
> rwlocks to POSIX ones):
> 
> WARNING: ThreadSanitizer: data race (pid=76927)
>   Write of size 8 at 0x00017ffd5628 by thread T5:
>     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:181 (dpdk-testpmd+0x769343)
>     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
>     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
>     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
>     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
>     #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
>     #6 (libtsan.so.0+0x2a68d)
> 
>   Previous read of size 8 at 0x00017ffd5628 by thread T3:
>     #0 vhost_user_iotlb_cache_find ../lib/librte_vhost/iotlb.c:252 (dpdk-testpmd+0x76ee96)
>     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:42 (dpdk-testpmd+0x77488c)
>     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7abeb3)
>     #3 map_one_desc ../lib/librte_vhost/virtio_net.c:497 (dpdk-testpmd+0x7abeb3)
>     #4 fill_vec_buf_packed ../lib/librte_vhost/virtio_net.c:751 (dpdk-testpmd+0x7abeb3)
>     #5 vhost_enqueue_single_packed ../lib/librte_vhost/virtio_net.c:1170 (dpdk-testpmd+0x7abeb3)
>     #6 virtio_dev_rx_single_packed ../lib/librte_vhost/virtio_net.c:1346 (dpdk-testpmd+0x7abeb3)
>     #7 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1384 (dpdk-testpmd+0x7abeb3)
>     #8 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
>     #9 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
>     #10 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
>     #11 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
>     #12 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
>     #13 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
>     #14 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
>     #15 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
>     #16 (libtsan.so.0+0x2a68d)
> 
>   Location is global '' at 0x000000000000 (rtemap_0+0x00003ffd5628)
> 
>   Thread T5 'vhost-events' (tid=76933, running) created by main thread at:
>     #0 pthread_create (libtsan.so.0+0x2cd42)
>     #1 rte_ctrl_thread_create ../lib/librte_eal/common/eal_common_thread.c:216 (dpdk-testpmd+0xa289e7)
>     #2 rte_vhost_driver_start ../lib/librte_vhost/socket.c:1190 (dpdk-testpmd+0x7728ef)
>     #3 vhost_driver_setup ../drivers/net/vhost/rte_eth_vhost.c:1028 (dpdk-testpmd+0x1de233d)
>     #4 eth_dev_configure ../drivers/net/vhost/rte_eth_vhost.c:1126 (dpdk-testpmd+0x1de29cc)
>     #5 rte_eth_dev_configure ../lib/librte_ethdev/rte_ethdev.c:1439 (dpdk-testpmd+0x991ce2)
>     #6 start_port ../app/test-pmd/testpmd.c:2450 (dpdk-testpmd+0x4f9b45)
>     #7 main ../app/test-pmd/testpmd.c:3777 (dpdk-testpmd+0x4fe1ac)
> 
>   Thread T3 'lcore-slave-3' (tid=76931, running) created by main thread at:
>     #0 pthread_create (libtsan.so.0+0x2cd42)
>     #1 rte_eal_init ../lib/librte_eal/linux/eal.c:1244 (dpdk-testpmd+0xa46e2b)
>     #2 main ../app/test-pmd/testpmd.c:3673 (dpdk-testpmd+0x4fdd75)
> 
> --
> 
> Or:
> 
> WARNING: ThreadSanitizer: data race (pid=76927)
>   Write of size 1 at 0x00017ffd00f8 by thread T5:
>     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:182 (dpdk-testpmd+0x769370)
>     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
>     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
>     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
>     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
>     #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
>     #6 (libtsan.so.0+0x2a68d)
> 
>   Previous write of size 1 at 0x00017ffd00f8 by thread T3:
>     #0 vhost_user_iotlb_pending_insert ../lib/librte_vhost/iotlb.c:86 (dpdk-testpmd+0x75eb0c)
>     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:58 (dpdk-testpmd+0x774926)
>     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7a79d1)
>     #3 virtio_dev_rx_batch_packed ../lib/librte_vhost/virtio_net.c:1295 (dpdk-testpmd+0x7a79d1)
>     #4 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1376 (dpdk-testpmd+0x7a79d1)
>     #5 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
>     #6 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
>     #7 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
>     #8 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
>     #9 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
>     #10 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
>     #11 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
>     #12 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
>     #13 (libtsan.so.0+0x2a68d)
> 
> --
> 
> As a consequence, the two threads can modify the same entry of the
> mempool. Usually, this causes a loop in the iotlb_pending_entries list.
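To spell out for other reviewers why MEMPOOL_F_SC_GET is the culprit:
the flag makes the pool's underlying ring use its single-consumer
dequeue, which is only safe if a single thread ever calls
rte_mempool_get(). Here both the vhost-events control thread (cache
inserts) and the forwarding lcores (pending inserts on an IOTLB miss)
allocate from the same pool. A minimal sketch of the pattern - the
names ex_pool, ex_pool_init() and entry_alloc() are made up for
illustration, this is not the vhost code itself:

#include <stdint.h>
#include <rte_memory.h>
#include <rte_mempool.h>

static struct rte_mempool *ex_pool;

static int
ex_pool_init(void)
{
	/*
	 * No MEMPOOL_F_SC_GET: rte_mempool_get() then uses the default
	 * multi-consumer dequeue, so it is safe to call from both the
	 * control thread and the datapath lcores. MEMPOOL_F_SP_PUT is
	 * kept as in the patch (this assumes puts stay serialised).
	 */
	ex_pool = rte_mempool_create("ex_pool", 2048, sizeof(uint64_t),
			0, 0, NULL, NULL, NULL, NULL, SOCKET_ID_ANY,
			MEMPOOL_F_NO_CACHE_ALIGN | MEMPOOL_F_SP_PUT);
	return ex_pool == NULL ? -1 : 0;
}

/* May be called concurrently from two threads, like the iotlb code. */
static void *
entry_alloc(void)
{
	void *obj;

	if (rte_mempool_get(ex_pool, &obj) != 0)
		return NULL; /* pool exhausted */
	return obj;
}

With MEMPOOL_F_SC_GET set, two threads inside entry_alloc() would move
the ring's consumer head without synchronisation and can be handed the
same element, matching the reports above where both threads end up
writing the same mempool entry.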
> Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  lib/librte_vhost/iotlb.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/lib/librte_vhost/iotlb.c b/lib/librte_vhost/iotlb.c
> index 5b3a0c090..e0b67721b 100644
> --- a/lib/librte_vhost/iotlb.c
> +++ b/lib/librte_vhost/iotlb.c
> @@ -321,8 +321,7 @@ vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
>  		IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
>  		0, 0, NULL, NULL, NULL, socket,
>  		MEMPOOL_F_NO_CACHE_ALIGN |
> -			MEMPOOL_F_SP_PUT |
> -			MEMPOOL_F_SC_GET);
> +			MEMPOOL_F_SP_PUT);
>  	if (!vq->iotlb_pool) {
>  		VHOST_LOG_CONFIG(ERR,
>  			"Failed to create IOTLB cache pool (%s)\n",
> 

Looks ok to me, but would need review from vhost maintainer.
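On the usual perf worry with dropping a single-consumer flag: without
MEMPOOL_F_SC_GET the get path becomes a compare-and-set loop on the
ring's consumer head instead of a plain update, but this pool is only
touched on IOTLB insertions and misses, not per packet, so I would not
expect a measurable difference.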