From: Eugenio Perez Martin
Date: Wed, 26 Aug 2020 14:50:43 +0200
To: "Xia, Chenbo"
Cc: "dev@dpdk.org", Adrian Moreno Zapata, Maxime Coquelin, "stable@dpdk.org", "Wang, Zhihong"
Subject: Re: [dpdk-dev] [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag

Hi Chenbo.
On Wed, Aug 26, 2020 at 8:29 AM Xia, Chenbo wrote:
>
> Hi Eugenio,
>
> > -----Original Message-----
> > From: Eugenio Pérez
> > Sent: Monday, August 10, 2020 10:11 PM
> > To: dev@dpdk.org
> > Cc: Adrian Moreno Zapata; Maxime Coquelin; stable@dpdk.org;
> > Wang, Zhihong; Xia, Chenbo
> > Subject: [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag
> >
> > Bugzilla bug: 523
> >
> > Using testpmd as a vhost-user with iommu:
> >
> >     /home/dpdk/build/app/dpdk-testpmd -l 1,3 \
> >         --vdev net_vhost0,iface=/tmp/vhost-user1,queues=1,iommu-support=1 \
> >         -- --auto-start --stats-period 5 --forward-mode=txonly
> >
> > And qemu with packed virtqueue:
> >
> > [QEMU/libvirt domain XML snippet stripped by the list archive; only the
> > fragment "function='0x0'/>" survives]
> >
> > ...
>
> The fix looks fine to me. But the commit message is a little bit complicated
> to me (also, some lines are too long). Since this bug is clear, it could be
> described by something like 'the control thread which handles iotlb messages
> and the forwarding thread which uses the iotlb to translate addresses may
> modify the same entry of the mempool and may cause a loop in the
> iotlb_pending_entries list'. Do you think it makes sense?

Sure, I just wanted to give enough information to reproduce it, but that
can go in the bugzilla case too if you prefer. Do you need me to send a v2?

Thanks!

> Thanks for the fix!
> Chenbo
>
> > --
> >
> > It is possible to consume the iotlb entries of the mempool from different
> > threads. Thread sanitizer example output (after changing the rwlocks to
> > POSIX ones):
> >
> > WARNING: ThreadSanitizer: data race (pid=76927)
> >   Write of size 8 at 0x00017ffd5628 by thread T5:
> >     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:181 (dpdk-testpmd+0x769343)
> >     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
> >     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
> >     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
> >     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
> >     #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
> >     #6 (libtsan.so.0+0x2a68d)
> >
> >   Previous read of size 8 at 0x00017ffd5628 by thread T3:
> >     #0 vhost_user_iotlb_cache_find ../lib/librte_vhost/iotlb.c:252 (dpdk-testpmd+0x76ee96)
> >     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:42 (dpdk-testpmd+0x77488c)
> >     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7abeb3)
> >     #3 map_one_desc ../lib/librte_vhost/virtio_net.c:497 (dpdk-testpmd+0x7abeb3)
> >     #4 fill_vec_buf_packed ../lib/librte_vhost/virtio_net.c:751 (dpdk-testpmd+0x7abeb3)
> >     #5 vhost_enqueue_single_packed ../lib/librte_vhost/virtio_net.c:1170 (dpdk-testpmd+0x7abeb3)
> >     #6 virtio_dev_rx_single_packed ../lib/librte_vhost/virtio_net.c:1346 (dpdk-testpmd+0x7abeb3)
> >     #7 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1384 (dpdk-testpmd+0x7abeb3)
> >     #8 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
> >     #9 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
> >     #10 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
> >     #11 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
> >     #12 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
> >     #13 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
> >     #14 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
> >     #15 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
> >     #16 (libtsan.so.0+0x2a68d)
> >
> >   Location is global '' at 0x000000000000 (rtemap_0+0x00003ffd5628)
> >
> >   Thread T5 'vhost-events' (tid=76933, running) created by main thread at:
> >     #0 pthread_create (libtsan.so.0+0x2cd42)
> >     #1 rte_ctrl_thread_create ../lib/librte_eal/common/eal_common_thread.c:216 (dpdk-testpmd+0xa289e7)
> >     #2 rte_vhost_driver_start ../lib/librte_vhost/socket.c:1190 (dpdk-testpmd+0x7728ef)
> >     #3 vhost_driver_setup ../drivers/net/vhost/rte_eth_vhost.c:1028 (dpdk-testpmd+0x1de233d)
> >     #4 eth_dev_configure ../drivers/net/vhost/rte_eth_vhost.c:1126 (dpdk-testpmd+0x1de29cc)
> >     #5 rte_eth_dev_configure ../lib/librte_ethdev/rte_ethdev.c:1439 (dpdk-testpmd+0x991ce2)
> >     #6 start_port ../app/test-pmd/testpmd.c:2450 (dpdk-testpmd+0x4f9b45)
> >     #7 main ../app/test-pmd/testpmd.c:3777 (dpdk-testpmd+0x4fe1ac)
> >
> >   Thread T3 'lcore-slave-3' (tid=76931, running) created by main thread at:
> >     #0 pthread_create (libtsan.so.0+0x2cd42)
> >     #1 rte_eal_init ../lib/librte_eal/linux/eal.c:1244 (dpdk-testpmd+0xa46e2b)
> >     #2 main ../app/test-pmd/testpmd.c:3673 (dpdk-testpmd+0x4fdd75)
> >
> > --
> >
> > Or:
> >
> > WARNING: ThreadSanitizer: data race (pid=76927)
> >   Write of size 1 at 0x00017ffd00f8 by thread T5:
> >     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:182 (dpdk-testpmd+0x769370)
> >     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
> >     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
> >     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
> >     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
> >     #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
> >     #6 (libtsan.so.0+0x2a68d)
> >
> >   Previous write of size 1 at 0x00017ffd00f8 by thread T3:
> >     #0 vhost_user_iotlb_pending_insert ../lib/librte_vhost/iotlb.c:86 (dpdk-testpmd+0x75eb0c)
> >     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:58 (dpdk-testpmd+0x774926)
> >     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7a79d1)
> >     #3 virtio_dev_rx_batch_packed ../lib/librte_vhost/virtio_net.c:1295 (dpdk-testpmd+0x7a79d1)
> >     #4 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1376 (dpdk-testpmd+0x7a79d1)
> >     #5 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
> >     #6 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
> >     #7 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
> >     #8 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
> >     #9 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
> >     #10 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
> >     #11 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
> >     #12 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
> >     #13 (libtsan.so.0+0x2a68d)
> >
> > --
> >
> > As a consequence, the two threads can modify the same entry of the mempool.
> > Usually, this causes a loop in the iotlb_pending_entries list.
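
To make the failure mode concrete, here is a minimal standalone sketch of my
own (not the vhost code; the pool name "iotlb-sketch", the element count/size
and the consume_entries() helper are invented for illustration): two lcores
call rte_mempool_get() on a pool that was created with MEMPOOL_F_SC_GET,
which selects the single-consumer dequeue path and therefore does not allow
concurrent gets.

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_memory.h>
#include <rte_mempool.h>

static struct rte_mempool *pool;

/* Stand-in for both the vhost-events control thread and the forwarding
 * lcore: each worker repeatedly takes an entry and gives it back. */
static int
consume_entries(void *arg)
{
    void *entry;
    int i;

    (void)arg;
    for (i = 0; i < 100000; i++) {
        /* Two threads dequeue concurrently here. With MEMPOOL_F_SC_GET
         * the pool uses a single-consumer dequeue, so this is the kind
         * of race the reports above show on the IOTLB entries. */
        if (rte_mempool_get(pool, &entry) == 0)
            rte_mempool_put(pool, entry);
    }
    return 0;
}

int
main(int argc, char **argv)
{
    unsigned int lcore;

    if (rte_eal_init(argc, argv) < 0)
        return -1;

    /* Only MEMPOOL_F_SC_GET is set, so the puts above stay on the
     * default multi-producer path; the gets are the problem. */
    pool = rte_mempool_create("iotlb-sketch", 2047, 64, 0, 0,
                              NULL, NULL, NULL, NULL, SOCKET_ID_ANY,
                              MEMPOOL_F_SC_GET);
    if (pool == NULL)
        return -1;

    lcore = rte_get_next_lcore(-1, 1, 0);
    if (lcore < RTE_MAX_LCORE)
        rte_eal_remote_launch(consume_entries, NULL, lcore);
    lcore = rte_get_next_lcore(lcore, 1, 0);
    if (lcore < RTE_MAX_LCORE)
        rte_eal_remote_launch(consume_entries, NULL, lcore);
    rte_eal_mp_wait_lcore();

    return 0;
}

The race disappears once the pool is created without MEMPOOL_F_SC_GET, i.e.
with the default multi-consumer get, which is what the patch below does for
the IOTLB pool.
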
> >
> > Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
> > Signed-off-by: Eugenio Pérez
> > ---
> >  lib/librte_vhost/iotlb.c | 3 +--
> >  1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/lib/librte_vhost/iotlb.c b/lib/librte_vhost/iotlb.c
> > index 5b3a0c090..e0b67721b 100644
> > --- a/lib/librte_vhost/iotlb.c
> > +++ b/lib/librte_vhost/iotlb.c
> > @@ -321,8 +321,7 @@ vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
> >  		IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
> >  		0, 0, NULL, NULL, NULL, socket,
> >  		MEMPOOL_F_NO_CACHE_ALIGN |
> > -		MEMPOOL_F_SP_PUT |
> > -		MEMPOOL_F_SC_GET);
> > +		MEMPOOL_F_SP_PUT);
> >  	if (!vq->iotlb_pool) {
> >  		VHOST_LOG_CONFIG(ERR,
> >  				"Failed to create IOTLB cache pool (%s)\n",
> > --
> > 2.18.1
>
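
For reference, the resulting flag set can be read like the sketch below (my
own paraphrase with a placeholder function name, create_iotlb_pool_sketch(),
and generic parameters, not a copy of iotlb.c): MEMPOOL_F_SC_GET is dropped
because both the control thread and the forwarding thread call
rte_mempool_get() on this pool, while MEMPOOL_F_SP_PUT is kept on the
assumption, implied by the patch, that entries are still returned from a
single thread.

#include <rte_mempool.h>

/* Sketch only: placeholder name and sizes, not the actual iotlb.c code. */
struct rte_mempool *
create_iotlb_pool_sketch(const char *name, unsigned int nb_entries,
                         unsigned int elt_size, int socket)
{
    return rte_mempool_create(name, nb_entries, elt_size,
                              0, 0,   /* no per-lcore cache, no private data */
                              NULL, NULL, NULL, NULL,
                              socket,
                              MEMPOOL_F_NO_CACHE_ALIGN |
                              MEMPOOL_F_SP_PUT); /* default multi-consumer get */
}
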