From: Eugenio Perez Martin <eperezma@redhat.com>
To: "Xia, Chenbo" <chenbo.xia@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
Adrian Moreno Zapata <amorenoz@redhat.com>,
Maxime Coquelin <maxime.coquelin@redhat.com>,
"stable@dpdk.org" <stable@dpdk.org>,
"Wang, Zhihong" <zhihong.wang@intel.com>
Subject: Re: [dpdk-dev] [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag
Date: Wed, 26 Aug 2020 14:50:43 +0200 [thread overview]
Message-ID: <CAJaqyWfihL7svxqAUR7sv6DnLafTdsrL3TcRebDzi-GJRqAPbA@mail.gmail.com> (raw)
In-Reply-To: <MN2PR11MB40631F7773E0AA289EFE57B99C540@MN2PR11MB4063.namprd11.prod.outlook.com>
Hi Chenbo.
On Wed, Aug 26, 2020 at 8:29 AM Xia, Chenbo <chenbo.xia@intel.com> wrote:
>
> Hi Eugenio,
>
> > -----Original Message-----
> > From: Eugenio Pérez <eperezma@redhat.com>
> > Sent: Monday, August 10, 2020 10:11 PM
> > To: dev@dpdk.org
> > Cc: Adrian Moreno Zapata <amorenoz@redhat.com>; Maxime Coquelin
> > <maxime.coquelin@redhat.com>; stable@dpdk.org; Wang, Zhihong
> > <zhihong.wang@intel.com>; Xia, Chenbo <chenbo.xia@intel.com>
> > Subject: [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag
> >
> > Bugzilla ID: 523
> >
> > Using testpmd as a vhost-user device with IOMMU enabled:
> >
> > /home/dpdk/build/app/dpdk-testpmd -l 1,3 \
> >   --vdev net_vhost0,iface=/tmp/vhost-user1,queues=1,iommu-support=1 \
> >   -- --auto-start --stats-period 5 --forward-mode=txonly
> >
> > And qemu with packed virtqueue:
> >
> > <interface type='vhostuser'>
> > <mac address='88:67:11:5f:dd:02'/>
> > <source type='unix' path='/tmp/vhost-user1' mode='client'/>
> > <model type='virtio'/>
> > <driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
> > <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
> > </interface>
> > ...
> >
> > <qemu:commandline>
> > <qemu:arg value='-set'/>
> > <qemu:arg value='device.net1.packed=on'/>
> > </qemu:commandline>
> >
>
> The fix looks fine to me, but the commit message is a little
> complicated (also, some lines are too long). Since the bug itself is
> clear, it could be described by something like: 'the control thread,
> which handles IOTLB messages, and the forwarding thread, which uses
> the IOTLB to translate addresses, may modify the same mempool entry
> and cause a loop in the iotlb_pending_entries list'. Do you think
> that makes sense?
Sure, I just wanted to give enough information to reproduce it, but
that can live in the Bugzilla entry instead if you prefer. Do you need
me to send a v2?
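For reference, the race reduces to two threads doing concurrent gets
from a mempool that was created with MEMPOOL_F_SC_GET. A minimal
sketch of the two consumers (hypothetical helper names, not the exact
DPDK call sites):

    #include <rte_mempool.h>

    /* Control thread ("vhost-events"): vhost_user_iotlb_cache_insert()
     * takes an entry from the pool while handling an IOTLB message. */
    static void control_path(struct rte_mempool *iotlb_pool)
    {
            void *entry;

            if (rte_mempool_get(iotlb_pool, &entry) != 0)
                    return;
            /* ... fill the entry and link it into the cache list ... */
    }

    /* Forwarding lcore: on a translation miss,
     * vhost_user_iotlb_pending_insert() also takes an entry. */
    static void forwarding_path(struct rte_mempool *iotlb_pool)
    {
            void *entry;

            if (rte_mempool_get(iotlb_pool, &entry) != 0)
                    return;
            /* ... link the entry into iotlb_pending_list ... */
    }

With MEMPOOL_F_SC_GET, rte_mempool_get() takes the single-consumer
dequeue path, so the two concurrent gets can hand out the same object;
linking that one object into both lists is how iotlb_pending_list ends
up with a loop.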
Thanks!
>
> Thanks for the fix!
> Chenbo
>
> > --
> >
> > It is possible to consume the iotlb mempool's entries from
> > different threads. ThreadSanitizer example output (after changing
> > the rwlocks to POSIX ones):
> >
> > WARNING: ThreadSanitizer: data race (pid=76927)
> >   Write of size 8 at 0x00017ffd5628 by thread T5:
> >     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:181 (dpdk-testpmd+0x769343)
> >     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
> >     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
> >     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
> >     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
> >     #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
> >     #6 <null> <null> (libtsan.so.0+0x2a68d)
> >
> >   Previous read of size 8 at 0x00017ffd5628 by thread T3:
> >     #0 vhost_user_iotlb_cache_find ../lib/librte_vhost/iotlb.c:252 (dpdk-testpmd+0x76ee96)
> >     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:42 (dpdk-testpmd+0x77488c)
> >     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7abeb3)
> >     #3 map_one_desc ../lib/librte_vhost/virtio_net.c:497 (dpdk-testpmd+0x7abeb3)
> >     #4 fill_vec_buf_packed ../lib/librte_vhost/virtio_net.c:751 (dpdk-testpmd+0x7abeb3)
> >     #5 vhost_enqueue_single_packed ../lib/librte_vhost/virtio_net.c:1170 (dpdk-testpmd+0x7abeb3)
> >     #6 virtio_dev_rx_single_packed ../lib/librte_vhost/virtio_net.c:1346 (dpdk-testpmd+0x7abeb3)
> >     #7 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1384 (dpdk-testpmd+0x7abeb3)
> >     #8 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
> >     #9 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
> >     #10 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
> >     #11 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
> >     #12 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
> >     #13 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
> >     #14 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
> >     #15 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
> >     #16 <null> <null> (libtsan.so.0+0x2a68d)
> >
> > Location is global '<null>' at 0x000000000000 (rtemap_0+0x00003ffd5628)
> >
> >   Thread T5 'vhost-events' (tid=76933, running) created by main thread at:
> >     #0 pthread_create <null> (libtsan.so.0+0x2cd42)
> >     #1 rte_ctrl_thread_create ../lib/librte_eal/common/eal_common_thread.c:216 (dpdk-testpmd+0xa289e7)
> >     #2 rte_vhost_driver_start ../lib/librte_vhost/socket.c:1190 (dpdk-testpmd+0x7728ef)
> >     #3 vhost_driver_setup ../drivers/net/vhost/rte_eth_vhost.c:1028 (dpdk-testpmd+0x1de233d)
> >     #4 eth_dev_configure ../drivers/net/vhost/rte_eth_vhost.c:1126 (dpdk-testpmd+0x1de29cc)
> >     #5 rte_eth_dev_configure ../lib/librte_ethdev/rte_ethdev.c:1439 (dpdk-testpmd+0x991ce2)
> >     #6 start_port ../app/test-pmd/testpmd.c:2450 (dpdk-testpmd+0x4f9b45)
> >     #7 main ../app/test-pmd/testpmd.c:3777 (dpdk-testpmd+0x4fe1ac)
> >
> >   Thread T3 'lcore-slave-3' (tid=76931, running) created by main thread at:
> >     #0 pthread_create <null> (libtsan.so.0+0x2cd42)
> >     #1 rte_eal_init ../lib/librte_eal/linux/eal.c:1244 (dpdk-testpmd+0xa46e2b)
> >     #2 main ../app/test-pmd/testpmd.c:3673 (dpdk-testpmd+0x4fdd75)
> >
> > --
> >
> > Or:
> > WARNING: ThreadSanitizer: data race (pid=76927)
> >   Write of size 1 at 0x00017ffd00f8 by thread T5:
> >     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:182 (dpdk-testpmd+0x769370)
> >     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
> >     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
> >     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
> >     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
> >     #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
> >     #6 <null> <null> (libtsan.so.0+0x2a68d)
> >
> >   Previous write of size 1 at 0x00017ffd00f8 by thread T3:
> >     #0 vhost_user_iotlb_pending_insert ../lib/librte_vhost/iotlb.c:86 (dpdk-testpmd+0x75eb0c)
> >     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:58 (dpdk-testpmd+0x774926)
> >     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7a79d1)
> >     #3 virtio_dev_rx_batch_packed ../lib/librte_vhost/virtio_net.c:1295 (dpdk-testpmd+0x7a79d1)
> >     #4 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1376 (dpdk-testpmd+0x7a79d1)
> >     #5 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
> >     #6 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
> >     #7 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
> >     #8 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
> >     #9 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
> >     #10 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
> >     #11 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
> >     #12 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
> >     #13 <null> <null> (libtsan.so.0+0x2a68d)
> >
> > --
> >
> > As a consequence, the two threads can modify the same entry of the
> > mempool. Usually, this causes a loop in the iotlb_pending_entries
> > list.
> >
> > Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> > lib/librte_vhost/iotlb.c | 3 +--
> > 1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/lib/librte_vhost/iotlb.c b/lib/librte_vhost/iotlb.c
> > index 5b3a0c090..e0b67721b 100644
> > --- a/lib/librte_vhost/iotlb.c
> > +++ b/lib/librte_vhost/iotlb.c
> > @@ -321,8 +321,7 @@ vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
> >  			IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
> >  			0, 0, NULL, NULL, NULL, socket,
> >  			MEMPOOL_F_NO_CACHE_ALIGN |
> > -			MEMPOOL_F_SP_PUT |
> > -			MEMPOOL_F_SC_GET);
> > +			MEMPOOL_F_SP_PUT);
> >  	if (!vq->iotlb_pool) {
> >  		VHOST_LOG_CONFIG(ERR,
> >  				"Failed to create IOTLB cache pool (%s)\n",
> > --
> > 2.18.1
>
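For completeness, a minimal sketch of the pool creation after the fix
(placeholder size and element values, not the real IOTLB_CACHE_SIZE or
entry layout). Dropping MEMPOOL_F_SC_GET selects the default
multi-consumer dequeue, so the control thread and the forwarding
lcores may call rte_mempool_get() concurrently; MEMPOOL_F_SP_PUT is
kept here only to mirror the patch:

    #include <rte_mempool.h>

    static struct rte_mempool *
    create_iotlb_pool_sketch(const char *name, int socket)
    {
            return rte_mempool_create(name,
                            2048,   /* n: placeholder pool size */
                            64,     /* elt_size: placeholder entry size */
                            0, 0,   /* cache_size, private_data_size */
                            NULL, NULL, NULL, NULL, /* no init callbacks */
                            socket,
                            MEMPOOL_F_NO_CACHE_ALIGN |
                            MEMPOOL_F_SP_PUT); /* no SC_GET: get is MC-safe */
    }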
Thread overview:
2020-08-10 14:11 [dpdk-dev] [PATCH 0/1] " Eugenio Pérez
2020-08-10 14:11 ` [dpdk-dev] [PATCH 1/1] " Eugenio Pérez
2020-08-25 9:17 ` Kevin Traynor
2020-08-26 6:28 ` Xia, Chenbo
2020-08-26 12:50 ` Eugenio Perez Martin [this message]
2020-08-27 1:20 ` Xia, Chenbo
2020-08-28 18:40 ` Jens Freimann
2020-08-18 14:21 ` [dpdk-dev] [PATCH 0/1] " Eugenio Perez Martin
2020-08-31 7:59 ` [dpdk-dev] [PATCH v2 0/1] vhost: Make iotlb mempool not single-consumer Eugenio Pérez
2020-08-31 7:59 ` [dpdk-dev] [PATCH v2 1/1] vhost: fix iotlb mempool single-consumer flag Eugenio Pérez
2020-08-31 10:21 ` Xia, Chenbo
2020-09-18 9:49 ` Maxime Coquelin
2020-09-18 12:29 ` Maxime Coquelin