From: Eugenio Pérez <eperezma@redhat.com>
To: dev@dpdk.org
Cc: Adrian Moreno Zapata, Maxime Coquelin, stable@dpdk.org,
	Zhihong Wang, Chenbo Xia
Date: Mon, 10 Aug 2020 16:11:02 +0200
Message-Id: <20200810141103.8015-1-eperezma@redhat.com>
MIME-Version: 1.0
Subject: [dpdk-stable] [PATCH 0/1] vhost: fix iotlb mempool single-consumer flag
List-Id: patches for DPDK stable branches

Bugzilla bug: 523

Using testpmd as a vhost-user with iommu:

/home/dpdk/build/app/dpdk-testpmd -l 1,3 \
    --vdev net_vhost0,iface=/tmp/vhost-user1,queues=1,iommu-support=1 \
    -- --auto-start --stats-period 5 --forward-mode=txonly

And qemu with packed virtqueue:
...

--

It is possible to consume the iotlb mempool's entries from different
threads. Thread sanitizer example output (after changing the rwlocks to
POSIX ones):

WARNING: ThreadSanitizer: data race (pid=76927)
  Write of size 8 at 0x00017ffd5628 by thread T5:
    #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:181 (dpdk-testpmd+0x769343)
    #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
    #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
    #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
    #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
    #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
    #6 <null> (libtsan.so.0+0x2a68d)

  Previous read of size 8 at 0x00017ffd5628 by thread T3:
    #0 vhost_user_iotlb_cache_find ../lib/librte_vhost/iotlb.c:252 (dpdk-testpmd+0x76ee96)
    #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:42 (dpdk-testpmd+0x77488c)
    #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7abeb3)
    #3 map_one_desc ../lib/librte_vhost/virtio_net.c:497 (dpdk-testpmd+0x7abeb3)
    #4 fill_vec_buf_packed ../lib/librte_vhost/virtio_net.c:751 (dpdk-testpmd+0x7abeb3)
    #5 vhost_enqueue_single_packed ../lib/librte_vhost/virtio_net.c:1170 (dpdk-testpmd+0x7abeb3)
    #6 virtio_dev_rx_single_packed ../lib/librte_vhost/virtio_net.c:1346 (dpdk-testpmd+0x7abeb3)
    #7 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1384 (dpdk-testpmd+0x7abeb3)
    #8 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
    #9 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
    #10 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
    #11 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
    #12 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
    #13 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080
(dpdk-testpmd+0x4f8951)
    #14 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
    #15 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
    #16 <null> (libtsan.so.0+0x2a68d)

  Location is global '' at 0x000000000000 (rtemap_0+0x00003ffd5628)

  Thread T5 'vhost-events' (tid=76933, running) created by main thread at:
    #0 pthread_create (libtsan.so.0+0x2cd42)
    #1 rte_ctrl_thread_create ../lib/librte_eal/common/eal_common_thread.c:216 (dpdk-testpmd+0xa289e7)
    #2 rte_vhost_driver_start ../lib/librte_vhost/socket.c:1190 (dpdk-testpmd+0x7728ef)
    #3 vhost_driver_setup ../drivers/net/vhost/rte_eth_vhost.c:1028 (dpdk-testpmd+0x1de233d)
    #4 eth_dev_configure ../drivers/net/vhost/rte_eth_vhost.c:1126 (dpdk-testpmd+0x1de29cc)
    #5 rte_eth_dev_configure ../lib/librte_ethdev/rte_ethdev.c:1439 (dpdk-testpmd+0x991ce2)
    #6 start_port ../app/test-pmd/testpmd.c:2450 (dpdk-testpmd+0x4f9b45)
    #7 main ../app/test-pmd/testpmd.c:3777 (dpdk-testpmd+0x4fe1ac)

  Thread T3 'lcore-slave-3' (tid=76931, running) created by main thread at:
    #0 pthread_create (libtsan.so.0+0x2cd42)
    #1 rte_eal_init ../lib/librte_eal/linux/eal.c:1244 (dpdk-testpmd+0xa46e2b)
    #2 main ../app/test-pmd/testpmd.c:3673 (dpdk-testpmd+0x4fdd75)

--

Or:

WARNING: ThreadSanitizer: data race (pid=76927)
  Write of size 1 at 0x00017ffd00f8 by thread T5:
    #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:182 (dpdk-testpmd+0x769370)
    #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
    #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
    #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
    #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
    #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
    #6 <null> (libtsan.so.0+0x2a68d)

  Previous write of size 1 at 0x00017ffd00f8 by thread T3:
    #0 vhost_user_iotlb_pending_insert
../lib/librte_vhost/iotlb.c:86 (dpdk-testpmd+0x75eb0c)
    #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:58 (dpdk-testpmd+0x774926)
    #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7a79d1)
    #3 virtio_dev_rx_batch_packed ../lib/librte_vhost/virtio_net.c:1295 (dpdk-testpmd+0x7a79d1)
    #4 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1376 (dpdk-testpmd+0x7a79d1)
    #5 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
    #6 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
    #7 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
    #8 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
    #9 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
    #10 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
    #11 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
    #12 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
    #13 <null> (libtsan.so.0+0x2a68d)

--

As a consequence, the two threads can modify the same entry of the
mempool. Usually, this causes a loop in the iotlb_pending_entries list.

This behavior is only observed with a packed vq and the virtio-net
kernel driver in the guest, so we could make setting the single-consumer
flag conditional. However, I have not found why the issue does not show
up with split virtqueues, so the safer option is to never set the flag.

Any comments?

Thanks!

Eugenio Pérez (1):
  vhost: fix iotlb mempool single-consumer flag

 lib/librte_vhost/iotlb.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

-- 
2.18.1