DPDK patches and discussions
* [dpdk-dev] [PATCH 0/1] vhost: fix iotlb mempool single-consumer flag
@ 2020-08-10 14:11 Eugenio Pérez
  2020-08-10 14:11 ` [dpdk-dev] [PATCH 1/1] " Eugenio Pérez
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Eugenio Pérez @ 2020-08-10 14:11 UTC (permalink / raw)
  To: dev
  Cc: Adrian Moreno Zapata, Maxime Coquelin, stable, Zhihong Wang, Chenbo Xia

Bugzilla bug: 523

Using testpmd as a vhost-user with iommu:

/home/dpdk/build/app/dpdk-testpmd -l 1,3 \
	--vdev net_vhost0,iface=/tmp/vhost-user1,queues=1,iommu-support=1 \
	-- --auto-start --stats-period 5 --forward-mode=txonly

And qemu with packed virtqueue:

    <interface type='vhostuser'>
      <mac address='88:67:11:5f:dd:02'/>
      <source type='unix' path='/tmp/vhost-user1' mode='client'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>
...

  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.net1.packed=on'/>
  </qemu:commandline>

--

It is possible to consume the entries of the iotlb mempool from different
threads. ThreadSanitizer example output (after changing the rwlocks to POSIX
ones):

WARNING: ThreadSanitizer: data race (pid=76927)
  Write of size 8 at 0x00017ffd5628 by thread T5:
    #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:181 (dpdk-testpmd+0x769343)
    #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
    #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
    #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
    #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
    #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
    #6 <null> <null> (libtsan.so.0+0x2a68d)

  Previous read of size 8 at 0x00017ffd5628 by thread T3:
    #0 vhost_user_iotlb_cache_find ../lib/librte_vhost/iotlb.c:252 (dpdk-testpmd+0x76ee96)
    #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:42 (dpdk-testpmd+0x77488c)
    #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7abeb3)
    #3 map_one_desc ../lib/librte_vhost/virtio_net.c:497 (dpdk-testpmd+0x7abeb3)
    #4 fill_vec_buf_packed ../lib/librte_vhost/virtio_net.c:751 (dpdk-testpmd+0x7abeb3)
    #5 vhost_enqueue_single_packed ../lib/librte_vhost/virtio_net.c:1170 (dpdk-testpmd+0x7abeb3)
    #6 virtio_dev_rx_single_packed ../lib/librte_vhost/virtio_net.c:1346 (dpdk-testpmd+0x7abeb3)
    #7 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1384 (dpdk-testpmd+0x7abeb3)
    #8 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
    #9 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
    #10 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
    #11 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
    #12 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
    #13 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
    #14 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
    #15 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
    #16 <null> <null> (libtsan.so.0+0x2a68d)

  Location is global '<null>' at 0x000000000000 (rtemap_0+0x00003ffd5628)

  Thread T5 'vhost-events' (tid=76933, running) created by main thread at:
    #0 pthread_create <null> (libtsan.so.0+0x2cd42)
    #1 rte_ctrl_thread_create ../lib/librte_eal/common/eal_common_thread.c:216 (dpdk-testpmd+0xa289e7)
    #2 rte_vhost_driver_start ../lib/librte_vhost/socket.c:1190 (dpdk-testpmd+0x7728ef)
    #3 vhost_driver_setup ../drivers/net/vhost/rte_eth_vhost.c:1028 (dpdk-testpmd+0x1de233d)
    #4 eth_dev_configure ../drivers/net/vhost/rte_eth_vhost.c:1126 (dpdk-testpmd+0x1de29cc)
    #5 rte_eth_dev_configure ../lib/librte_ethdev/rte_ethdev.c:1439 (dpdk-testpmd+0x991ce2)
    #6 start_port ../app/test-pmd/testpmd.c:2450 (dpdk-testpmd+0x4f9b45)
    #7 main ../app/test-pmd/testpmd.c:3777 (dpdk-testpmd+0x4fe1ac)

  Thread T3 'lcore-slave-3' (tid=76931, running) created by main thread at:
    #0 pthread_create <null> (libtsan.so.0+0x2cd42)
    #1 rte_eal_init ../lib/librte_eal/linux/eal.c:1244 (dpdk-testpmd+0xa46e2b)
    #2 main ../app/test-pmd/testpmd.c:3673 (dpdk-testpmd+0x4fdd75)

--

Or:
WARNING: ThreadSanitizer: data race (pid=76927)
  Write of size 1 at 0x00017ffd00f8 by thread T5:
    #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:182 (dpdk-testpmd+0x769370)
    #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
    #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
    #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
    #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
    #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
    #6 <null> <null> (libtsan.so.0+0x2a68d)

  Previous write of size 1 at 0x00017ffd00f8 by thread T3:
    #0 vhost_user_iotlb_pending_insert ../lib/librte_vhost/iotlb.c:86 (dpdk-testpmd+0x75eb0c)
    #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:58 (dpdk-testpmd+0x774926)
    #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7a79d1)
    #3 virtio_dev_rx_batch_packed ../lib/librte_vhost/virtio_net.c:1295 (dpdk-testpmd+0x7a79d1)
    #4 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1376 (dpdk-testpmd+0x7a79d1)
    #5 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
    #6 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
    #7 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
    #8 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
    #9 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
    #10 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
    #11 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
    #12 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
    #13 <null> <null> (libtsan.so.0+0x2a68d)

--

As a consequence, the two threads can modify the same entry of the mempool.
Usually, this causes a loop in the iotlb_pending_entries list.

This behavior is only observed with a packed vq and the virtio-net kernel
driver in the guest, so we could make the single-consumer flag optional.
However, I have not found why the issue does not show up with split
virtqueues, so the safer option is to never set the flag.
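
To make the race concrete, below is a minimal sketch of the two code paths
that both take objects from the per-virtqueue iotlb_pool (simplified by hand
from the call stacks above, not the DPDK sources verbatim). With
MEMPOOL_F_SC_GET set, rte_mempool_get() assumes a single dequeuing thread, so
these two concurrent callers may be handed the same entry:

#include <rte_mempool.h>

/* vhost-events control thread, while handling VHOST_USER_IOTLB_MSG: */
static void cache_insert_sketch(struct rte_mempool *iotlb_pool)
{
	void *node;

	if (rte_mempool_get(iotlb_pool, &node) != 0)	/* consumer #1 */
		return;
	/* ... fill in the vhost_iotlb_entry and link it into the cache ... */
}

/* forwarding lcore, on an IOTLB miss inside __vhost_iova_to_vva(): */
static void pending_insert_sketch(struct rte_mempool *iotlb_pool)
{
	void *node;

	if (rte_mempool_get(iotlb_pool, &node) != 0)	/* consumer #2 */
		return;
	/* ... add the entry to iotlb_pending_list and send IOTLB_MISS ... */
}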

Any comments?

Thanks!

Eugenio Pérez (1):
  vhost: fix iotlb mempool single-consumer flag

 lib/librte_vhost/iotlb.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

-- 
2.18.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag
  2020-08-10 14:11 [dpdk-dev] [PATCH 0/1] vhost: fix iotlb mempool single-consumer flag Eugenio Pérez
@ 2020-08-10 14:11 ` Eugenio Pérez
  2020-08-25  9:17   ` Kevin Traynor
                     ` (2 more replies)
  2020-08-18 14:21 ` [dpdk-dev] [PATCH 0/1] " Eugenio Perez Martin
  2020-08-31  7:59 ` [dpdk-dev] [PATCH v2 0/1] vhost: Make iotlb mempool not single-consumer Eugenio Pérez
  2 siblings, 3 replies; 13+ messages in thread
From: Eugenio Pérez @ 2020-08-10 14:11 UTC (permalink / raw)
  To: dev
  Cc: Adrian Moreno Zapata, Maxime Coquelin, stable, Zhihong Wang, Chenbo Xia

Bugzilla bug: 523

Using testpmd as a vhost-user with iommu:

/home/dpdk/build/app/dpdk-testpmd -l 1,3 \
        --vdev net_vhost0,iface=/tmp/vhost-user1,queues=1,iommu-support=1 \
        -- --auto-start --stats-period 5 --forward-mode=txonly

And qemu with packed virtqueue:

    <interface type='vhostuser'>
      <mac address='88:67:11:5f:dd:02'/>
      <source type='unix' path='/tmp/vhost-user1' mode='client'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>
...

  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.net1.packed=on'/>
  </qemu:commandline>

--

It is possible to consume the entries of the iotlb mempool from different
threads. ThreadSanitizer example output (after changing the rwlocks to POSIX
ones):

WARNING: ThreadSanitizer: data race (pid=76927)
  Write of size 8 at 0x00017ffd5628 by thread T5:
    #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:181 (dpdk-testpmd+0x769343)
    #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
    #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
    #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
    #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
    #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
    #6 <null> <null> (libtsan.so.0+0x2a68d)

  Previous read of size 8 at 0x00017ffd5628 by thread T3:
    #0 vhost_user_iotlb_cache_find ../lib/librte_vhost/iotlb.c:252 (dpdk-testpmd+0x76ee96)
    #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:42 (dpdk-testpmd+0x77488c)
    #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7abeb3)
    #3 map_one_desc ../lib/librte_vhost/virtio_net.c:497 (dpdk-testpmd+0x7abeb3)
    #4 fill_vec_buf_packed ../lib/librte_vhost/virtio_net.c:751 (dpdk-testpmd+0x7abeb3)
    #5 vhost_enqueue_single_packed ../lib/librte_vhost/virtio_net.c:1170 (dpdk-testpmd+0x7abeb3)
    #6 virtio_dev_rx_single_packed ../lib/librte_vhost/virtio_net.c:1346 (dpdk-testpmd+0x7abeb3)
    #7 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1384 (dpdk-testpmd+0x7abeb3)
    #8 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
    #9 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
    #10 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
    #11 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
    #12 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
    #13 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
    #14 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
    #15 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
    #16 <null> <null> (libtsan.so.0+0x2a68d)

  Location is global '<null>' at 0x000000000000 (rtemap_0+0x00003ffd5628)

  Thread T5 'vhost-events' (tid=76933, running) created by main thread at:
    #0 pthread_create <null> (libtsan.so.0+0x2cd42)
    #1 rte_ctrl_thread_create ../lib/librte_eal/common/eal_common_thread.c:216 (dpdk-testpmd+0xa289e7)
    #2 rte_vhost_driver_start ../lib/librte_vhost/socket.c:1190 (dpdk-testpmd+0x7728ef)
    #3 vhost_driver_setup ../drivers/net/vhost/rte_eth_vhost.c:1028 (dpdk-testpmd+0x1de233d)
    #4 eth_dev_configure ../drivers/net/vhost/rte_eth_vhost.c:1126 (dpdk-testpmd+0x1de29cc)
    #5 rte_eth_dev_configure ../lib/librte_ethdev/rte_ethdev.c:1439 (dpdk-testpmd+0x991ce2)
    #6 start_port ../app/test-pmd/testpmd.c:2450 (dpdk-testpmd+0x4f9b45)
    #7 main ../app/test-pmd/testpmd.c:3777 (dpdk-testpmd+0x4fe1ac)

  Thread T3 'lcore-slave-3' (tid=76931, running) created by main thread at:
    #0 pthread_create <null> (libtsan.so.0+0x2cd42)
    #1 rte_eal_init ../lib/librte_eal/linux/eal.c:1244 (dpdk-testpmd+0xa46e2b)
    #2 main ../app/test-pmd/testpmd.c:3673 (dpdk-testpmd+0x4fdd75)

--

Or:
WARNING: ThreadSanitizer: data race (pid=76927)
  Write of size 1 at 0x00017ffd00f8 by thread T5:
    #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:182 (dpdk-testpmd+0x769370)
    #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
    #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
    #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
    #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
    #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
    #6 <null> <null> (libtsan.so.0+0x2a68d)

  Previous write of size 1 at 0x00017ffd00f8 by thread T3:
    #0 vhost_user_iotlb_pending_insert ../lib/librte_vhost/iotlb.c:86 (dpdk-testpmd+0x75eb0c)
    #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:58 (dpdk-testpmd+0x774926)
    #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7a79d1)
    #3 virtio_dev_rx_batch_packed ../lib/librte_vhost/virtio_net.c:1295 (dpdk-testpmd+0x7a79d1)
    #4 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1376 (dpdk-testpmd+0x7a79d1)
    #5 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
    #6 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
    #7 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
    #8 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
    #9 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
    #10 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
    #11 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
    #12 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
    #13 <null> <null> (libtsan.so.0+0x2a68d)

--

As a consequence, the two threads can modify the same entry of the mempool.
Usually, this causes a loop in the iotlb_pending_entries list.

Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 lib/librte_vhost/iotlb.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/lib/librte_vhost/iotlb.c b/lib/librte_vhost/iotlb.c
index 5b3a0c090..e0b67721b 100644
--- a/lib/librte_vhost/iotlb.c
+++ b/lib/librte_vhost/iotlb.c
@@ -321,8 +321,7 @@ vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
 			IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
 			0, 0, NULL, NULL, NULL, socket,
 			MEMPOOL_F_NO_CACHE_ALIGN |
-			MEMPOOL_F_SP_PUT |
-			MEMPOOL_F_SC_GET);
+			MEMPOOL_F_SP_PUT);
 	if (!vq->iotlb_pool) {
 		VHOST_LOG_CONFIG(ERR,
 				"Failed to create IOTLB cache pool (%s)\n",
-- 
2.18.1
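
The effect of the removed flag can also be shown outside of vhost: a pool
created with MEMPOOL_F_SC_GET must only ever be dequeued by one thread. A
minimal standalone sketch (a hypothetical test program, not part of this
patch; written against the DPDK 20.08-era API, where the launch constant is
still named SKIP_MASTER):

#include <rte_common.h>
#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

static struct rte_mempool *pool;

static int
getter(void *arg __rte_unused)
{
	void *obj;
	int i;

	/* Two worker lcores running this loop concurrently break the
	 * single-consumer contract: the SC dequeue is not thread-safe
	 * and may hand the same object to both threads. */
	for (i = 0; i < 1000000; i++) {
		if (rte_mempool_get(pool, &obj) == 0)
			rte_mempool_put(pool, obj);
	}
	return 0;
}

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* cache_size is 0 so every get/put goes to the shared ring. */
	pool = rte_mempool_create("sc_get_demo", 2047, 64, 0, 0,
				  NULL, NULL, NULL, NULL, SOCKET_ID_ANY,
				  MEMPOOL_F_SC_GET);
	if (pool == NULL)
		return -1;

	/* Run with at least two worker lcores, e.g. -l 0-2. */
	rte_eal_mp_remote_launch(getter, NULL, SKIP_MASTER);
	rte_eal_mp_wait_lcore();
	return 0;
}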


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH 0/1] vhost: fix iotlb mempool single-consumer flag
  2020-08-10 14:11 [dpdk-dev] [PATCH 0/1] vhost: fix iotlb mempool single-consumer flag Eugenio Pérez
  2020-08-10 14:11 ` [dpdk-dev] [PATCH 1/1] " Eugenio Pérez
@ 2020-08-18 14:21 ` Eugenio Perez Martin
  2020-08-31  7:59 ` [dpdk-dev] [PATCH v2 0/1] vhost: Make iotlb mempool not single-consumer Eugenio Pérez
  2 siblings, 0 replies; 13+ messages in thread
From: Eugenio Perez Martin @ 2020-08-18 14:21 UTC (permalink / raw)
  To: dev
  Cc: Adrian Moreno Zapata, Maxime Coquelin, stable, Zhihong Wang, Chenbo Xia

On Mon, Aug 10, 2020 at 4:11 PM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> Bugzilla bug: 523
>
> Using testpmd as a vhost-user with iommu:
>
> /home/dpdk/build/app/dpdk-testpmd -l 1,3 \
>         --vdev net_vhost0,iface=/tmp/vhost-user1,queues=1,iommu-support=1 \
>         -- --auto-start --stats-period 5 --forward-mode=txonly
>
> And qemu with packed virtqueue:
>
>     <interface type='vhostuser'>
>       <mac address='88:67:11:5f:dd:02'/>
>       <source type='unix' path='/tmp/vhost-user1' mode='client'/>
>       <model type='virtio'/>
>       <driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
>       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
>     </interface>
> ...
>
>   <qemu:commandline>
>     <qemu:arg value='-set'/>
>     <qemu:arg value='device.net1.packed=on'/>
>   </qemu:commandline>
>
> --
>
> It is possible to consume the entries of the iotlb mempool from different
> threads. ThreadSanitizer example output (after changing the rwlocks to POSIX
> ones):
>
> WARNING: ThreadSanitizer: data race (pid=76927)
>   Write of size 8 at 0x00017ffd5628 by thread T5:
>     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:181 (dpdk-testpmd+0x769343)
>     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
>     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
>     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
>     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
>     #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
>     #6 <null> <null> (libtsan.so.0+0x2a68d)
>
>   Previous read of size 8 at 0x00017ffd5628 by thread T3:
>     #0 vhost_user_iotlb_cache_find ../lib/librte_vhost/iotlb.c:252 (dpdk-testpmd+0x76ee96)
>     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:42 (dpdk-testpmd+0x77488c)
>     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7abeb3)
>     #3 map_one_desc ../lib/librte_vhost/virtio_net.c:497 (dpdk-testpmd+0x7abeb3)
>     #4 fill_vec_buf_packed ../lib/librte_vhost/virtio_net.c:751 (dpdk-testpmd+0x7abeb3)
>     #5 vhost_enqueue_single_packed ../lib/librte_vhost/virtio_net.c:1170 (dpdk-testpmd+0x7abeb3)
>     #6 virtio_dev_rx_single_packed ../lib/librte_vhost/virtio_net.c:1346 (dpdk-testpmd+0x7abeb3)
>     #7 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1384 (dpdk-testpmd+0x7abeb3)
>     #8 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
>     #9 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
>     #10 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
>     #11 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
>     #12 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
>     #13 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
>     #14 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
>     #15 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
>     #16 <null> <null> (libtsan.so.0+0x2a68d)
>
>   Location is global '<null>' at 0x000000000000 (rtemap_0+0x00003ffd5628)
>
>   Thread T5 'vhost-events' (tid=76933, running) created by main thread at:
>     #0 pthread_create <null> (libtsan.so.0+0x2cd42)
>     #1 rte_ctrl_thread_create ../lib/librte_eal/common/eal_common_thread.c:216 (dpdk-testpmd+0xa289e7)
>     #2 rte_vhost_driver_start ../lib/librte_vhost/socket.c:1190 (dpdk-testpmd+0x7728ef)
>     #3 vhost_driver_setup ../drivers/net/vhost/rte_eth_vhost.c:1028 (dpdk-testpmd+0x1de233d)
>     #4 eth_dev_configure ../drivers/net/vhost/rte_eth_vhost.c:1126 (dpdk-testpmd+0x1de29cc)
>     #5 rte_eth_dev_configure ../lib/librte_ethdev/rte_ethdev.c:1439 (dpdk-testpmd+0x991ce2)
>     #6 start_port ../app/test-pmd/testpmd.c:2450 (dpdk-testpmd+0x4f9b45)
>     #7 main ../app/test-pmd/testpmd.c:3777 (dpdk-testpmd+0x4fe1ac)
>
>   Thread T3 'lcore-slave-3' (tid=76931, running) created by main thread at:
>     #0 pthread_create <null> (libtsan.so.0+0x2cd42)
>     #1 rte_eal_init ../lib/librte_eal/linux/eal.c:1244 (dpdk-testpmd+0xa46e2b)
>     #2 main ../app/test-pmd/testpmd.c:3673 (dpdk-testpmd+0x4fdd75)
>
> --
>
> Or:
> WARNING: ThreadSanitizer: data race (pid=76927)
>   Write of size 1 at 0x00017ffd00f8 by thread T5:
>     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:182 (dpdk-testpmd+0x769370)
>     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
>     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
>     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
>     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
>     #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
>     #6 <null> <null> (libtsan.so.0+0x2a68d)
>
>   Previous write of size 1 at 0x00017ffd00f8 by thread T3:
>     #0 vhost_user_iotlb_pending_insert ../lib/librte_vhost/iotlb.c:86 (dpdk-testpmd+0x75eb0c)
>     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:58 (dpdk-testpmd+0x774926)
>     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7a79d1)
>     #3 virtio_dev_rx_batch_packed ../lib/librte_vhost/virtio_net.c:1295 (dpdk-testpmd+0x7a79d1)
>     #4 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1376 (dpdk-testpmd+0x7a79d1)
>     #5 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
>     #6 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
>     #7 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
>     #8 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
>     #9 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
>     #10 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
>     #11 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
>     #12 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
>     #13 <null> <null> (libtsan.so.0+0x2a68d)
>
> --
>
> As a consequence, the two threads can modify the same entry of the mempool.
> Usually, this causes a loop in the iotlb_pending_entries list.
>
> This behavior is only observed with a packed vq and the virtio-net kernel
> driver in the guest, so we could make the single-consumer flag optional.
> However, I have not found why the issue does not show up with split
> virtqueues, so the safer option is to never set the flag.
>
> Any comments?
>
> Thanks!
>
> Eugenio Pérez (1):
>   vhost: fix iotlb mempool single-consumer flag
>
>  lib/librte_vhost/iotlb.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> --
> 2.18.1
>

Hi! Just a friendly ping on this. Should I CC somebody else?

Thanks!


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag
  2020-08-10 14:11 ` [dpdk-dev] [PATCH 1/1] " Eugenio Pérez
@ 2020-08-25  9:17   ` Kevin Traynor
  2020-08-26  6:28   ` Xia, Chenbo
  2020-08-28 18:40   ` Jens Freimann
  2 siblings, 0 replies; 13+ messages in thread
From: Kevin Traynor @ 2020-08-25  9:17 UTC (permalink / raw)
  To: Eugenio Pérez, dev
  Cc: Adrian Moreno Zapata, Maxime Coquelin, stable, Zhihong Wang, Chenbo Xia

On 10/08/2020 15:11, Eugenio Pérez wrote:
> Bugzilla bug: 523
> 
> Using testpmd as a vhost-user with iommu:
> 
> /home/dpdk/build/app/dpdk-testpmd -l 1,3 \
>         --vdev net_vhost0,iface=/tmp/vhost-user1,queues=1,iommu-support=1 \
>         -- --auto-start --stats-period 5 --forward-mode=txonly
> 
> And qemu with packed virtqueue:
> 
>     <interface type='vhostuser'>
>       <mac address='88:67:11:5f:dd:02'/>
>       <source type='unix' path='/tmp/vhost-user1' mode='client'/>
>       <model type='virtio'/>
>       <driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
>       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
>     </interface>
> ...
> 
>   <qemu:commandline>
>     <qemu:arg value='-set'/>
>     <qemu:arg value='device.net1.packed=on'/>
>   </qemu:commandline>
> 
> --
> 
> It is possible to consume the entries of the iotlb mempool from different
> threads. ThreadSanitizer example output (after changing the rwlocks to POSIX
> ones):
> 
> WARNING: ThreadSanitizer: data race (pid=76927)
>   Write of size 8 at 0x00017ffd5628 by thread T5:
>     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:181 (dpdk-testpmd+0x769343)
>     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
>     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
>     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
>     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
>     #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
>     #6 <null> <null> (libtsan.so.0+0x2a68d)
> 
>   Previous read of size 8 at 0x00017ffd5628 by thread T3:
>     #0 vhost_user_iotlb_cache_find ../lib/librte_vhost/iotlb.c:252 (dpdk-testpmd+0x76ee96)
>     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:42 (dpdk-testpmd+0x77488c)
>     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7abeb3)
>     #3 map_one_desc ../lib/librte_vhost/virtio_net.c:497 (dpdk-testpmd+0x7abeb3)
>     #4 fill_vec_buf_packed ../lib/librte_vhost/virtio_net.c:751 (dpdk-testpmd+0x7abeb3)
>     #5 vhost_enqueue_single_packed ../lib/librte_vhost/virtio_net.c:1170 (dpdk-testpmd+0x7abeb3)
>     #6 virtio_dev_rx_single_packed ../lib/librte_vhost/virtio_net.c:1346 (dpdk-testpmd+0x7abeb3)
>     #7 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1384 (dpdk-testpmd+0x7abeb3)
>     #8 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
>     #9 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
>     #10 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
>     #11 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
>     #12 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
>     #13 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
>     #14 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
>     #15 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
>     #16 <null> <null> (libtsan.so.0+0x2a68d)
> 
>   Location is global '<null>' at 0x000000000000 (rtemap_0+0x00003ffd5628)
> 
>   Thread T5 'vhost-events' (tid=76933, running) created by main thread at:
>     #0 pthread_create <null> (libtsan.so.0+0x2cd42)
>     #1 rte_ctrl_thread_create ../lib/librte_eal/common/eal_common_thread.c:216 (dpdk-testpmd+0xa289e7)
>     #2 rte_vhost_driver_start ../lib/librte_vhost/socket.c:1190 (dpdk-testpmd+0x7728ef)
>     #3 vhost_driver_setup ../drivers/net/vhost/rte_eth_vhost.c:1028 (dpdk-testpmd+0x1de233d)
>     #4 eth_dev_configure ../drivers/net/vhost/rte_eth_vhost.c:1126 (dpdk-testpmd+0x1de29cc)
>     #5 rte_eth_dev_configure ../lib/librte_ethdev/rte_ethdev.c:1439 (dpdk-testpmd+0x991ce2)
>     #6 start_port ../app/test-pmd/testpmd.c:2450 (dpdk-testpmd+0x4f9b45)
>     #7 main ../app/test-pmd/testpmd.c:3777 (dpdk-testpmd+0x4fe1ac)
> 
>   Thread T3 'lcore-slave-3' (tid=76931, running) created by main thread at:
>     #0 pthread_create <null> (libtsan.so.0+0x2cd42)
>     #1 rte_eal_init ../lib/librte_eal/linux/eal.c:1244 (dpdk-testpmd+0xa46e2b)
>     #2 main ../app/test-pmd/testpmd.c:3673 (dpdk-testpmd+0x4fdd75)
> 
> --
> 
> Or:
> WARNING: ThreadSanitizer: data race (pid=76927)
>   Write of size 1 at 0x00017ffd00f8 by thread T5:
>     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:182 (dpdk-testpmd+0x769370)
>     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
>     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
>     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
>     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
>     #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
>     #6 <null> <null> (libtsan.so.0+0x2a68d)
> 
>   Previous write of size 1 at 0x00017ffd00f8 by thread T3:
>     #0 vhost_user_iotlb_pending_insert ../lib/librte_vhost/iotlb.c:86 (dpdk-testpmd+0x75eb0c)
>     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:58 (dpdk-testpmd+0x774926)
>     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7a79d1)
>     #3 virtio_dev_rx_batch_packed ../lib/librte_vhost/virtio_net.c:1295 (dpdk-testpmd+0x7a79d1)
>     #4 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1376 (dpdk-testpmd+0x7a79d1)
>     #5 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
>     #6 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
>     #7 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
>     #8 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
>     #9 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
>     #10 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
>     #11 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
>     #12 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
>     #13 <null> <null> (libtsan.so.0+0x2a68d)
> 
> --
> 
> As a consequence, the two threads can modify the same entry of the mempool.
> Usually, this causes a loop in the iotlb_pending_entries list.
> 
> Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  lib/librte_vhost/iotlb.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/lib/librte_vhost/iotlb.c b/lib/librte_vhost/iotlb.c
> index 5b3a0c090..e0b67721b 100644
> --- a/lib/librte_vhost/iotlb.c
> +++ b/lib/librte_vhost/iotlb.c
> @@ -321,8 +321,7 @@ vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
>  			IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
>  			0, 0, NULL, NULL, NULL, socket,
>  			MEMPOOL_F_NO_CACHE_ALIGN |
> -			MEMPOOL_F_SP_PUT |
> -			MEMPOOL_F_SC_GET);
> +			MEMPOOL_F_SP_PUT);
>  	if (!vq->iotlb_pool) {
>  		VHOST_LOG_CONFIG(ERR,
>  				"Failed to create IOTLB cache pool (%s)\n",
> 

Looks OK to me, but it would need review from a vhost maintainer.


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag
  2020-08-10 14:11 ` [dpdk-dev] [PATCH 1/1] " Eugenio Pérez
  2020-08-25  9:17   ` Kevin Traynor
@ 2020-08-26  6:28   ` Xia, Chenbo
  2020-08-26 12:50     ` Eugenio Perez Martin
  2020-08-28 18:40   ` Jens Freimann
  2 siblings, 1 reply; 13+ messages in thread
From: Xia, Chenbo @ 2020-08-26  6:28 UTC (permalink / raw)
  To: Eugenio Pérez, dev
  Cc: Adrian Moreno Zapata, Maxime Coquelin, stable, Wang, Zhihong

Hi Eugenio,

> -----Original Message-----
> From: Eugenio Pérez <eperezma@redhat.com>
> Sent: Monday, August 10, 2020 10:11 PM
> To: dev@dpdk.org
> Cc: Adrian Moreno Zapata <amorenoz@redhat.com>; Maxime Coquelin
> <maxime.coquelin@redhat.com>; stable@dpdk.org; Wang, Zhihong
> <zhihong.wang@intel.com>; Xia, Chenbo <chenbo.xia@intel.com>
> Subject: [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag
> 
> Bugzilla bug: 523
> 
> Using testpmd as a vhost-user with iommu:
> 
> /home/dpdk/build/app/dpdk-testpmd -l 1,3 \
>         --vdev net_vhost0,iface=/tmp/vhost-user1,queues=1,iommu-support=1
> \
>         -- --auto-start --stats-period 5 --forward-mode=txonly
> 
> And qemu with packed virtqueue:
> 
>     <interface type='vhostuser'>
>       <mac address='88:67:11:5f:dd:02'/>
>       <source type='unix' path='/tmp/vhost-user1' mode='client'/>
>       <model type='virtio'/>
>       <driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
>       <address type='pci' domain='0x0000' bus='0x07' slot='0x00'
> function='0x0'/>
>     </interface>
> ...
> 
>   <qemu:commandline>
>     <qemu:arg value='-set'/>
>     <qemu:arg value='device.net1.packed=on'/>
>   </qemu:commandline>
> 

The fix looks fine to me, but the commit message is a little complicated
(also, some lines are too long). The bug is clear and could be described by
something like: 'the control thread, which handles iotlb messages, and the
forwarding thread, which uses the iotlb to translate addresses, may modify
the same entry of the mempool, causing a loop in the iotlb_pending_entries
list'. Do you think that makes sense?

Thanks for the fix!
Chenbo

> --
> 
> It is possible to consume the entries of the iotlb mempool from different
> threads. ThreadSanitizer example output (after changing the rwlocks to
> POSIX ones):
> 
> WARNING: ThreadSanitizer: data race (pid=76927)
>   Write of size 8 at 0x00017ffd5628 by thread T5:
>     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:181
> (dpdk-testpmd+0x769343)
>     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-
> testpmd+0x78e4bf)
>     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-
> testpmd+0x78fcf8)
>     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-
> testpmd+0x770162)
>     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-
> testpmd+0x7591c2)
>     #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193
> (dpdk-testpmd+0xa2890b)
>     #6 <null> <null> (libtsan.so.0+0x2a68d)
> 
>   Previous read of size 8 at 0x00017ffd5628 by thread T3:
>     #0 vhost_user_iotlb_cache_find ../lib/librte_vhost/iotlb.c:252 (dpdk-
> testpmd+0x76ee96)
>     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:42 (dpdk-
> testpmd+0x77488c)
>     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-
> testpmd+0x7abeb3)
>     #3 map_one_desc ../lib/librte_vhost/virtio_net.c:497 (dpdk-
> testpmd+0x7abeb3)
>     #4 fill_vec_buf_packed ../lib/librte_vhost/virtio_net.c:751 (dpdk-
> testpmd+0x7abeb3)
>     #5 vhost_enqueue_single_packed ../lib/librte_vhost/virtio_net.c:1170
> (dpdk-testpmd+0x7abeb3)
>     #6 virtio_dev_rx_single_packed ../lib/librte_vhost/virtio_net.c:1346
> (dpdk-testpmd+0x7abeb3)
>     #7 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1384 (dpdk-
> testpmd+0x7abeb3)
>     #8 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-
> testpmd+0x7b0654)
>     #9 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465
> (dpdk-testpmd+0x7b0654)
>     #10 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-
> testpmd+0x1ddfbd8)
>     #11 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-
> testpmd+0x505fdb)
>     #12 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-
> testpmd+0x5106ad)
>     #13 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-
> testpmd+0x4f8951)
>     #14 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-
> testpmd+0x4f89d7)
>     #15 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-
> testpmd+0xa5b20a)
>     #16 <null> <null> (libtsan.so.0+0x2a68d)
> 
>   Location is global '<null>' at 0x000000000000 (rtemap_0+0x00003ffd5628)
> 
>   Thread T5 'vhost-events' (tid=76933, running) created by main thread at:
>     #0 pthread_create <null> (libtsan.so.0+0x2cd42)
>     #1
> rte_ctrl_thread_create ../lib/librte_eal/common/eal_common_thread.c:216
> (dpdk-testpmd+0xa289e7)
>     #2 rte_vhost_driver_start ../lib/librte_vhost/socket.c:1190 (dpdk-
> testpmd+0x7728ef)
>     #3 vhost_driver_setup ../drivers/net/vhost/rte_eth_vhost.c:1028 (dpdk-
> testpmd+0x1de233d)
>     #4 eth_dev_configure ../drivers/net/vhost/rte_eth_vhost.c:1126 (dpdk-
> testpmd+0x1de29cc)
>     #5 rte_eth_dev_configure ../lib/librte_ethdev/rte_ethdev.c:1439 (dpdk-
> testpmd+0x991ce2)
>     #6 start_port ../app/test-pmd/testpmd.c:2450 (dpdk-testpmd+0x4f9b45)
>     #7 main ../app/test-pmd/testpmd.c:3777 (dpdk-testpmd+0x4fe1ac)
> 
>   Thread T3 'lcore-slave-3' (tid=76931, running) created by main thread at:
>     #0 pthread_create <null> (libtsan.so.0+0x2cd42)
>     #1 rte_eal_init ../lib/librte_eal/linux/eal.c:1244 (dpdk-
> testpmd+0xa46e2b)
>     #2 main ../app/test-pmd/testpmd.c:3673 (dpdk-testpmd+0x4fdd75)
> 
> --
> 
> Or:
> WARNING: ThreadSanitizer: data race (pid=76927)
>   Write of size 1 at 0x00017ffd00f8 by thread T5:
>     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:182
> (dpdk-testpmd+0x769370)
>     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-
> testpmd+0x78e4bf)
>     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-
> testpmd+0x78fcf8)
>     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-
> testpmd+0x770162)
>     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-
> testpmd+0x7591c2)
>     #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193
> (dpdk-testpmd+0xa2890b)
>     #6 <null> <null> (libtsan.so.0+0x2a68d)
> 
>   Previous write of size 1 at 0x00017ffd00f8 by thread T3:
>     #0 vhost_user_iotlb_pending_insert ../lib/librte_vhost/iotlb.c:86
> (dpdk-testpmd+0x75eb0c)
>     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:58 (dpdk-
> testpmd+0x774926)
>     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-
> testpmd+0x7a79d1)
>     #3 virtio_dev_rx_batch_packed ../lib/librte_vhost/virtio_net.c:1295
> (dpdk-testpmd+0x7a79d1)
>     #4 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1376 (dpdk-
> testpmd+0x7a79d1)
>     #5 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-
> testpmd+0x7b0654)
>     #6 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465
> (dpdk-testpmd+0x7b0654)
>     #7 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-
> testpmd+0x1ddfbd8)
>     #8 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-
> testpmd+0x505fdb)
>     #9 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-
> testpmd+0x5106ad)
>     #10 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-
> testpmd+0x4f8951)
>     #11 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-
> testpmd+0x4f89d7)
>     #12 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-
> testpmd+0xa5b20a)
>     #13 <null> <null> (libtsan.so.0+0x2a68d)
> 
> --
> 
> As a consequence, the two threads can modify the same entry of the mempool.
> Usually, this causes a loop in the iotlb_pending_entries list.
> 
> Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  lib/librte_vhost/iotlb.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/lib/librte_vhost/iotlb.c b/lib/librte_vhost/iotlb.c
> index 5b3a0c090..e0b67721b 100644
> --- a/lib/librte_vhost/iotlb.c
> +++ b/lib/librte_vhost/iotlb.c
> @@ -321,8 +321,7 @@ vhost_user_iotlb_init(struct virtio_net *dev, int
> vq_index)
>  			IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
>  			0, 0, NULL, NULL, NULL, socket,
>  			MEMPOOL_F_NO_CACHE_ALIGN |
> -			MEMPOOL_F_SP_PUT |
> -			MEMPOOL_F_SC_GET);
> +			MEMPOOL_F_SP_PUT);
>  	if (!vq->iotlb_pool) {
>  		VHOST_LOG_CONFIG(ERR,
>  				"Failed to create IOTLB cache pool (%s)\n",
> --
> 2.18.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag
  2020-08-26  6:28   ` Xia, Chenbo
@ 2020-08-26 12:50     ` Eugenio Perez Martin
  2020-08-27  1:20       ` Xia, Chenbo
  0 siblings, 1 reply; 13+ messages in thread
From: Eugenio Perez Martin @ 2020-08-26 12:50 UTC (permalink / raw)
  To: Xia, Chenbo
  Cc: dev, Adrian Moreno Zapata, Maxime Coquelin, stable, Wang, Zhihong

Hi Chenbo.

On Wed, Aug 26, 2020 at 8:29 AM Xia, Chenbo <chenbo.xia@intel.com> wrote:
>
> Hi Eugenio,
>
> > -----Original Message-----
> > From: Eugenio Pérez <eperezma@redhat.com>
> > Sent: Monday, August 10, 2020 10:11 PM
> > To: dev@dpdk.org
> > Cc: Adrian Moreno Zapata <amorenoz@redhat.com>; Maxime Coquelin
> > <maxime.coquelin@redhat.com>; stable@dpdk.org; Wang, Zhihong
> > <zhihong.wang@intel.com>; Xia, Chenbo <chenbo.xia@intel.com>
> > Subject: [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag
> >
> > Bugzilla bug: 523
> >
> > Using testpmd as a vhost-user with iommu:
> >
> > /home/dpdk/build/app/dpdk-testpmd -l 1,3 \
> >         --vdev net_vhost0,iface=/tmp/vhost-user1,queues=1,iommu-support=1
> > \
> >         -- --auto-start --stats-period 5 --forward-mode=txonly
> >
> > And qemu with packed virtqueue:
> >
> >     <interface type='vhostuser'>
> >       <mac address='88:67:11:5f:dd:02'/>
> >       <source type='unix' path='/tmp/vhost-user1' mode='client'/>
> >       <model type='virtio'/>
> >       <driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
> >       <address type='pci' domain='0x0000' bus='0x07' slot='0x00'
> > function='0x0'/>
> >     </interface>
> > ...
> >
> >   <qemu:commandline>
> >     <qemu:arg value='-set'/>
> >     <qemu:arg value='device.net1.packed=on'/>
> >   </qemu:commandline>
> >
>
> The fix looks fine to me, but the commit message is a little complicated
> (also, some lines are too long). The bug is clear and could be described by
> something like: 'the control thread, which handles iotlb messages, and the
> forwarding thread, which uses the iotlb to translate addresses, may modify
> the same entry of the mempool, causing a loop in the iotlb_pending_entries
> list'. Do you think that makes sense?

Sure, I just wanted to give enough information to reproduce it, but
that can be in the bugzilla case too if you prefer. Do you need me to
send a v2?

Thanks!

>
> Thanks for the fix!
> Chenbo
>
> > --
> >
> > It is possible to consume the entries of the iotlb mempool from different
> > threads. ThreadSanitizer example output (after changing the rwlocks to
> > POSIX ones):
> >
> > WARNING: ThreadSanitizer: data race (pid=76927)
> >   Write of size 8 at 0x00017ffd5628 by thread T5:
> >     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:181
> > (dpdk-testpmd+0x769343)
> >     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-
> > testpmd+0x78e4bf)
> >     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-
> > testpmd+0x78fcf8)
> >     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-
> > testpmd+0x770162)
> >     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-
> > testpmd+0x7591c2)
> >     #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193
> > (dpdk-testpmd+0xa2890b)
> >     #6 <null> <null> (libtsan.so.0+0x2a68d)
> >
> >   Previous read of size 8 at 0x00017ffd5628 by thread T3:
> >     #0 vhost_user_iotlb_cache_find ../lib/librte_vhost/iotlb.c:252 (dpdk-
> > testpmd+0x76ee96)
> >     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:42 (dpdk-
> > testpmd+0x77488c)
> >     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-
> > testpmd+0x7abeb3)
> >     #3 map_one_desc ../lib/librte_vhost/virtio_net.c:497 (dpdk-
> > testpmd+0x7abeb3)
> >     #4 fill_vec_buf_packed ../lib/librte_vhost/virtio_net.c:751 (dpdk-
> > testpmd+0x7abeb3)
> >     #5 vhost_enqueue_single_packed ../lib/librte_vhost/virtio_net.c:1170
> > (dpdk-testpmd+0x7abeb3)
> >     #6 virtio_dev_rx_single_packed ../lib/librte_vhost/virtio_net.c:1346
> > (dpdk-testpmd+0x7abeb3)
> >     #7 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1384 (dpdk-
> > testpmd+0x7abeb3)
> >     #8 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-
> > testpmd+0x7b0654)
> >     #9 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465
> > (dpdk-testpmd+0x7b0654)
> >     #10 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-
> > testpmd+0x1ddfbd8)
> >     #11 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-
> > testpmd+0x505fdb)
> >     #12 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-
> > testpmd+0x5106ad)
> >     #13 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-
> > testpmd+0x4f8951)
> >     #14 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-
> > testpmd+0x4f89d7)
> >     #15 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-
> > testpmd+0xa5b20a)
> >     #16 <null> <null> (libtsan.so.0+0x2a68d)
> >
> >   Location is global '<null>' at 0x000000000000 (rtemap_0+0x00003ffd5628)
> >
> >   Thread T5 'vhost-events' (tid=76933, running) created by main thread at:
> >     #0 pthread_create <null> (libtsan.so.0+0x2cd42)
> >     #1
> > rte_ctrl_thread_create ../lib/librte_eal/common/eal_common_thread.c:216
> > (dpdk-testpmd+0xa289e7)
> >     #2 rte_vhost_driver_start ../lib/librte_vhost/socket.c:1190 (dpdk-
> > testpmd+0x7728ef)
> >     #3 vhost_driver_setup ../drivers/net/vhost/rte_eth_vhost.c:1028 (dpdk-
> > testpmd+0x1de233d)
> >     #4 eth_dev_configure ../drivers/net/vhost/rte_eth_vhost.c:1126 (dpdk-
> > testpmd+0x1de29cc)
> >     #5 rte_eth_dev_configure ../lib/librte_ethdev/rte_ethdev.c:1439 (dpdk-
> > testpmd+0x991ce2)
> >     #6 start_port ../app/test-pmd/testpmd.c:2450 (dpdk-testpmd+0x4f9b45)
> >     #7 main ../app/test-pmd/testpmd.c:3777 (dpdk-testpmd+0x4fe1ac)
> >
> >   Thread T3 'lcore-slave-3' (tid=76931, running) created by main thread at:
> >     #0 pthread_create <null> (libtsan.so.0+0x2cd42)
> >     #1 rte_eal_init ../lib/librte_eal/linux/eal.c:1244 (dpdk-
> > testpmd+0xa46e2b)
> >     #2 main ../app/test-pmd/testpmd.c:3673 (dpdk-testpmd+0x4fdd75)
> >
> > --
> >
> > Or:
> > WARNING: ThreadSanitizer: data race (pid=76927)
> >   Write of size 1 at 0x00017ffd00f8 by thread T5:
> >     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:182
> > (dpdk-testpmd+0x769370)
> >     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-
> > testpmd+0x78e4bf)
> >     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-
> > testpmd+0x78fcf8)
> >     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-
> > testpmd+0x770162)
> >     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-
> > testpmd+0x7591c2)
> >     #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193
> > (dpdk-testpmd+0xa2890b)
> >     #6 <null> <null> (libtsan.so.0+0x2a68d)
> >
> >   Previous write of size 1 at 0x00017ffd00f8 by thread T3:
> >     #0 vhost_user_iotlb_pending_insert ../lib/librte_vhost/iotlb.c:86
> > (dpdk-testpmd+0x75eb0c)
> >     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:58 (dpdk-
> > testpmd+0x774926)
> >     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-
> > testpmd+0x7a79d1)
> >     #3 virtio_dev_rx_batch_packed ../lib/librte_vhost/virtio_net.c:1295
> > (dpdk-testpmd+0x7a79d1)
> >     #4 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1376 (dpdk-
> > testpmd+0x7a79d1)
> >     #5 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-
> > testpmd+0x7b0654)
> >     #6 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465
> > (dpdk-testpmd+0x7b0654)
> >     #7 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-
> > testpmd+0x1ddfbd8)
> >     #8 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-
> > testpmd+0x505fdb)
> >     #9 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-
> > testpmd+0x5106ad)
> >     #10 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-
> > testpmd+0x4f8951)
> >     #11 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-
> > testpmd+0x4f89d7)
> >     #12 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-
> > testpmd+0xa5b20a)
> >     #13 <null> <null> (libtsan.so.0+0x2a68d)
> >
> > --
> >
> > As a consequence, the two threads can modify the same entry of the mempool.
> > Usually, this causes a loop in the iotlb_pending_entries list.
> >
> > Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >  lib/librte_vhost/iotlb.c | 3 +--
> >  1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/lib/librte_vhost/iotlb.c b/lib/librte_vhost/iotlb.c
> > index 5b3a0c090..e0b67721b 100644
> > --- a/lib/librte_vhost/iotlb.c
> > +++ b/lib/librte_vhost/iotlb.c
> > @@ -321,8 +321,7 @@ vhost_user_iotlb_init(struct virtio_net *dev, int
> > vq_index)
> >                       IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
> >                       0, 0, NULL, NULL, NULL, socket,
> >                       MEMPOOL_F_NO_CACHE_ALIGN |
> > -                     MEMPOOL_F_SP_PUT |
> > -                     MEMPOOL_F_SC_GET);
> > +                     MEMPOOL_F_SP_PUT);
> >       if (!vq->iotlb_pool) {
> >               VHOST_LOG_CONFIG(ERR,
> >                               "Failed to create IOTLB cache pool (%s)\n",
> > --
> > 2.18.1
>


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag
  2020-08-26 12:50     ` Eugenio Perez Martin
@ 2020-08-27  1:20       ` Xia, Chenbo
  0 siblings, 0 replies; 13+ messages in thread
From: Xia, Chenbo @ 2020-08-27  1:20 UTC (permalink / raw)
  To: Eugenio Perez Martin
  Cc: dev, Adrian Moreno Zapata, Maxime Coquelin, stable, Wang, Zhihong

Hi Eugenio,

> -----Original Message-----
> From: Eugenio Perez Martin <eperezma@redhat.com>
> Sent: Wednesday, August 26, 2020 8:51 PM
> To: Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Adrian Moreno Zapata <amorenoz@redhat.com>; Maxime
> Coquelin <maxime.coquelin@redhat.com>; stable@dpdk.org; Wang, Zhihong
> <zhihong.wang@intel.com>
> Subject: Re: [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag
> 
> Hi Chenbo.
> 
> On Wed, Aug 26, 2020 at 8:29 AM Xia, Chenbo <chenbo.xia@intel.com> wrote:
> >
> > Hi Eugenio,
> >
> > > -----Original Message-----
> > > From: Eugenio Pérez <eperezma@redhat.com>
> > > Sent: Monday, August 10, 2020 10:11 PM
> > > To: dev@dpdk.org
> > > Cc: Adrian Moreno Zapata <amorenoz@redhat.com>; Maxime Coquelin
> > > <maxime.coquelin@redhat.com>; stable@dpdk.org; Wang, Zhihong
> > > <zhihong.wang@intel.com>; Xia, Chenbo <chenbo.xia@intel.com>
> > > Subject: [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag
> > >
> > > Bugzilla bug: 523
> > >
> > > Using testpmd as a vhost-user with iommu:
> > >
> > > /home/dpdk/build/app/dpdk-testpmd -l 1,3 \
> > >         --vdev net_vhost0,iface=/tmp/vhost-user1,queues=1,iommu-
> support=1
> > > \
> > >         -- --auto-start --stats-period 5 --forward-mode=txonly
> > >
> > > And qemu with packed virtqueue:
> > >
> > >     <interface type='vhostuser'>
> > >       <mac address='88:67:11:5f:dd:02'/>
> > >       <source type='unix' path='/tmp/vhost-user1' mode='client'/>
> > >       <model type='virtio'/>
> > >       <driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
> > >       <address type='pci' domain='0x0000' bus='0x07' slot='0x00'
> > > function='0x0'/>
> > >     </interface>
> > > ...
> > >
> > >   <qemu:commandline>
> > >     <qemu:arg value='-set'/>
> > >     <qemu:arg value='device.net1.packed=on'/>
> > >   </qemu:commandline>
> > >
> >
> > The fix looks fine to me, but the commit message is a little complicated
> > (also, some lines are too long). The bug is clear and could be described
> > by something like: 'the control thread, which handles iotlb messages, and
> > the forwarding thread, which uses the iotlb to translate addresses, may
> > modify the same entry of the mempool, causing a loop in the
> > iotlb_pending_entries list'. Do you think that makes sense?
> 
> Sure, I just wanted to give enough information to reproduce it, but
> that can be in the bugzilla case too if you prefer. Do you need me to
> send a v2?
> 

Yes, the information is detailed and very helpful for review! But since
patchwork already emits a warning about the commit message, I'd prefer a
brief description with the Bugzilla link, and the details could stay behind
that link. Is this OK for you?

Thanks!
Chenbo  

> Thanks!
> 
> >
> > Thanks for the fix!
> > Chenbo
> >
> > > --
> > >
> > > It is possible to consume the entries of the iotlb mempool from
> > > different threads. ThreadSanitizer example output (after changing the
> > > rwlocks to POSIX ones):
> > >
> > > WARNING: ThreadSanitizer: data race (pid=76927)
> > >   Write of size 8 at 0x00017ffd5628 by thread T5:
> > >     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:181
> > > (dpdk-testpmd+0x769343)
> > >     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380
> (dpdk-
> > > testpmd+0x78e4bf)
> > >     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848
> (dpdk-
> > > testpmd+0x78fcf8)
> > >     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-
> > > testpmd+0x770162)
> > >     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-
> > > testpmd+0x7591c2)
> > >     #5
> ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193
> > > (dpdk-testpmd+0xa2890b)
> > >     #6 <null> <null> (libtsan.so.0+0x2a68d)
> > >
> > >   Previous read of size 8 at 0x00017ffd5628 by thread T3:
> > >     #0 vhost_user_iotlb_cache_find ../lib/librte_vhost/iotlb.c:252
> (dpdk-
> > > testpmd+0x76ee96)
> > >     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:42 (dpdk-
> > > testpmd+0x77488c)
> > >     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-
> > > testpmd+0x7abeb3)
> > >     #3 map_one_desc ../lib/librte_vhost/virtio_net.c:497 (dpdk-
> > > testpmd+0x7abeb3)
> > >     #4 fill_vec_buf_packed ../lib/librte_vhost/virtio_net.c:751 (dpdk-
> > > testpmd+0x7abeb3)
> > >     #5
> > > vhost_enqueue_single_packed ../lib/librte_vhost/virtio_net.c:1170 (dpdk-testpmd+0x7abeb3)
> > >     #6 virtio_dev_rx_single_packed ../lib/librte_vhost/virtio_net.c:1346 (dpdk-testpmd+0x7abeb3)
> > >     #7 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1384 (dpdk-testpmd+0x7abeb3)
> > >     #8 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
> > >     #9 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
> > >     #10 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
> > >     #11 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
> > >     #12 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
> > >     #13 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
> > >     #14 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
> > >     #15 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
> > >     #16 <null> <null> (libtsan.so.0+0x2a68d)
> > >
> > >   Location is global '<null>' at 0x000000000000 (rtemap_0+0x00003ffd5628)
> > >
> > >   Thread T5 'vhost-events' (tid=76933, running) created by main thread at:
> > >     #0 pthread_create <null> (libtsan.so.0+0x2cd42)
> > >     #1 rte_ctrl_thread_create ../lib/librte_eal/common/eal_common_thread.c:216 (dpdk-testpmd+0xa289e7)
> > >     #2 rte_vhost_driver_start ../lib/librte_vhost/socket.c:1190 (dpdk-testpmd+0x7728ef)
> > >     #3 vhost_driver_setup ../drivers/net/vhost/rte_eth_vhost.c:1028 (dpdk-testpmd+0x1de233d)
> > >     #4 eth_dev_configure ../drivers/net/vhost/rte_eth_vhost.c:1126 (dpdk-testpmd+0x1de29cc)
> > >     #5 rte_eth_dev_configure ../lib/librte_ethdev/rte_ethdev.c:1439 (dpdk-testpmd+0x991ce2)
> > >     #6 start_port ../app/test-pmd/testpmd.c:2450 (dpdk-testpmd+0x4f9b45)
> > >     #7 main ../app/test-pmd/testpmd.c:3777 (dpdk-testpmd+0x4fe1ac)
> > >
> > >   Thread T3 'lcore-slave-3' (tid=76931, running) created by main thread at:
> > >     #0 pthread_create <null> (libtsan.so.0+0x2cd42)
> > >     #1 rte_eal_init ../lib/librte_eal/linux/eal.c:1244 (dpdk-testpmd+0xa46e2b)
> > >     #2 main ../app/test-pmd/testpmd.c:3673 (dpdk-testpmd+0x4fdd75)
> > >
> > > --
> > >
> > > Or:
> > > WARNING: ThreadSanitizer: data race (pid=76927)
> > >   Write of size 1 at 0x00017ffd00f8 by thread T5:
> > >     #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:182 (dpdk-testpmd+0x769370)
> > >     #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
> > >     #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
> > >     #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
> > >     #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
> > >     #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
> > >     #6 <null> <null> (libtsan.so.0+0x2a68d)
> > >
> > >   Previous write of size 1 at 0x00017ffd00f8 by thread T3:
> > >     #0 vhost_user_iotlb_pending_insert ../lib/librte_vhost/iotlb.c:86 (dpdk-testpmd+0x75eb0c)
> > >     #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:58 (dpdk-testpmd+0x774926)
> > >     #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7a79d1)
> > >     #3 virtio_dev_rx_batch_packed ../lib/librte_vhost/virtio_net.c:1295 (dpdk-testpmd+0x7a79d1)
> > >     #4 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1376 (dpdk-testpmd+0x7a79d1)
> > >     #5 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
> > >     #6 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
> > >     #7 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
> > >     #8 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
> > >     #9 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
> > >     #10 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
> > >     #11 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
> > >     #12 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
> > >     #13 <null> <null> (libtsan.so.0+0x2a68d)
> > >
> > > --
> > >
> > > As a consequence, the two threads can modify the same entry of the mempool.
> > > Usually, this causes a loop in the iotlb_pending_entries list.
> > >
> > > Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
> > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > ---
> > >  lib/librte_vhost/iotlb.c | 3 +--
> > >  1 file changed, 1 insertion(+), 2 deletions(-)
> > >
> > > diff --git a/lib/librte_vhost/iotlb.c b/lib/librte_vhost/iotlb.c
> > > index 5b3a0c090..e0b67721b 100644
> > > --- a/lib/librte_vhost/iotlb.c
> > > +++ b/lib/librte_vhost/iotlb.c
> > > @@ -321,8 +321,7 @@ vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
> > >  			IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
> > >  			0, 0, NULL, NULL, NULL, socket,
> > >  			MEMPOOL_F_NO_CACHE_ALIGN |
> > > -			MEMPOOL_F_SP_PUT |
> > > -			MEMPOOL_F_SC_GET);
> > > +			MEMPOOL_F_SP_PUT);
> > >  	if (!vq->iotlb_pool) {
> > >  		VHOST_LOG_CONFIG(ERR,
> > >  				"Failed to create IOTLB cache pool (%s)\n",
> > > --
> > > 2.18.1
> >
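To make the failure mode in the quoted commit message concrete: if the
mempool hands the same element to both threads, the entry can be linked
into the pending TAILQ twice, after which the node points back at itself
and traversal never terminates. A minimal standalone sketch of that effect
(illustrative names, not the actual vhost code):

#include <stdio.h>
#include <sys/queue.h>

struct entry {
	TAILQ_ENTRY(entry) next;
	int val;
};

TAILQ_HEAD(pending_head, entry);

int main(void)
{
	struct pending_head pending = TAILQ_HEAD_INITIALIZER(pending);
	struct entry e = { .val = 42 };
	struct entry *it;

	TAILQ_INSERT_TAIL(&pending, &e, next);
	/* A double allocation amounts to a second insert of the same node;
	 * after this, e.next.tqe_next points back at &e. */
	TAILQ_INSERT_TAIL(&pending, &e, next);

	TAILQ_FOREACH(it, &pending, next)	/* never terminates */
		printf("%d\n", it->val);

	return 0;
}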


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag
  2020-08-10 14:11 ` [dpdk-dev] [PATCH 1/1] " Eugenio Pérez
  2020-08-25  9:17   ` Kevin Traynor
  2020-08-26  6:28   ` Xia, Chenbo
@ 2020-08-28 18:40   ` Jens Freimann
  2 siblings, 0 replies; 13+ messages in thread
From: Jens Freimann @ 2020-08-28 18:40 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: dev, Adrian Moreno Zapata, Maxime Coquelin, stable, Zhihong Wang,
	Chenbo Xia

Hi Eugenio,

On Mon, Aug 10, 2020 at 04:11:03PM +0200, Eugenio Pérez wrote:
>Bugzilla bug: 523
>
>Using testpmd as a vhost-user with iommu:
>
>/home/dpdk/build/app/dpdk-testpmd -l 1,3 \
>        --vdev net_vhost0,iface=/tmp/vhost-user1,queues=1,iommu-support=1 \
>        -- --auto-start --stats-period 5 --forward-mode=txonly
>
>And qemu with packed virtqueue:
>
>    <interface type='vhostuser'>
>      <mac address='88:67:11:5f:dd:02'/>
>      <source type='unix' path='/tmp/vhost-user1' mode='client'/>
>      <model type='virtio'/>
>      <driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
>      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
>    </interface>
>...
>
>  <qemu:commandline>
>    <qemu:arg value='-set'/>
>    <qemu:arg value='device.net1.packed=on'/>
>  </qemu:commandline>
>
>--
>
>It is possible to consume the iotlb entries of the mempool from different
>threads. ThreadSanitizer example output (after changing the rwlocks to
>POSIX ones):
>
>WARNING: ThreadSanitizer: data race (pid=76927)
>  Write of size 8 at 0x00017ffd5628 by thread T5:
>    #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:181 (dpdk-testpmd+0x769343)
>    #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
>    #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
>    #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
>    #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
>    #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
>    #6 <null> <null> (libtsan.so.0+0x2a68d)
>
>  Previous read of size 8 at 0x00017ffd5628 by thread T3:
>    #0 vhost_user_iotlb_cache_find ../lib/librte_vhost/iotlb.c:252 (dpdk-testpmd+0x76ee96)
>    #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:42 (dpdk-testpmd+0x77488c)
>    #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7abeb3)
>    #3 map_one_desc ../lib/librte_vhost/virtio_net.c:497 (dpdk-testpmd+0x7abeb3)
>    #4 fill_vec_buf_packed ../lib/librte_vhost/virtio_net.c:751 (dpdk-testpmd+0x7abeb3)
>    #5 vhost_enqueue_single_packed ../lib/librte_vhost/virtio_net.c:1170 (dpdk-testpmd+0x7abeb3)
>    #6 virtio_dev_rx_single_packed ../lib/librte_vhost/virtio_net.c:1346 (dpdk-testpmd+0x7abeb3)
>    #7 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1384 (dpdk-testpmd+0x7abeb3)
>    #8 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
>    #9 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
>    #10 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
>    #11 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
>    #12 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
>    #13 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
>    #14 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
>    #15 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
>    #16 <null> <null> (libtsan.so.0+0x2a68d)
>
>  Location is global '<null>' at 0x000000000000 (rtemap_0+0x00003ffd5628)
>
>  Thread T5 'vhost-events' (tid=76933, running) created by main thread at:
>    #0 pthread_create <null> (libtsan.so.0+0x2cd42)
>    #1 rte_ctrl_thread_create ../lib/librte_eal/common/eal_common_thread.c:216 (dpdk-testpmd+0xa289e7)
>    #2 rte_vhost_driver_start ../lib/librte_vhost/socket.c:1190 (dpdk-testpmd+0x7728ef)
>    #3 vhost_driver_setup ../drivers/net/vhost/rte_eth_vhost.c:1028 (dpdk-testpmd+0x1de233d)
>    #4 eth_dev_configure ../drivers/net/vhost/rte_eth_vhost.c:1126 (dpdk-testpmd+0x1de29cc)
>    #5 rte_eth_dev_configure ../lib/librte_ethdev/rte_ethdev.c:1439 (dpdk-testpmd+0x991ce2)
>    #6 start_port ../app/test-pmd/testpmd.c:2450 (dpdk-testpmd+0x4f9b45)
>    #7 main ../app/test-pmd/testpmd.c:3777 (dpdk-testpmd+0x4fe1ac)
>
>  Thread T3 'lcore-slave-3' (tid=76931, running) created by main thread at:
>    #0 pthread_create <null> (libtsan.so.0+0x2cd42)
>    #1 rte_eal_init ../lib/librte_eal/linux/eal.c:1244 (dpdk-testpmd+0xa46e2b)
>    #2 main ../app/test-pmd/testpmd.c:3673 (dpdk-testpmd+0x4fdd75)
>
>--
>
>Or:
>WARNING: ThreadSanitizer: data race (pid=76927)
>  Write of size 1 at 0x00017ffd00f8 by thread T5:
>    #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:182 (dpdk-testpmd+0x769370)
>    #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
>    #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
>    #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
>    #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
>    #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
>    #6 <null> <null> (libtsan.so.0+0x2a68d)
>
>  Previous write of size 1 at 0x00017ffd00f8 by thread T3:
>    #0 vhost_user_iotlb_pending_insert ../lib/librte_vhost/iotlb.c:86 (dpdk-testpmd+0x75eb0c)
>    #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:58 (dpdk-testpmd+0x774926)
>    #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7a79d1)
>    #3 virtio_dev_rx_batch_packed ../lib/librte_vhost/virtio_net.c:1295 (dpdk-testpmd+0x7a79d1)
>    #4 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1376 (dpdk-testpmd+0x7a79d1)
>    #5 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
>    #6 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
>    #7 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
>    #8 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
>    #9 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
>    #10 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
>    #11 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
>    #12 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
>    #13 <null> <null> (libtsan.so.0+0x2a68d)
>
>--
>
>As a consequence, the two threads can modify the same entry of the mempool.
>Usually, this causes a loop in the iotlb_pending_entries list.
>
>Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
>Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
>---
> lib/librte_vhost/iotlb.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>

looks good to me.

Reviewed-by: Jens Freimann <jfreimann@redhat.com>

regards,
Jens 


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v2 0/1] vhost: Make iotlb mempool not single-consumer
  2020-08-10 14:11 [dpdk-dev] [PATCH 0/1] vhost: fix iotlb mempool single-consumer flag Eugenio Pérez
  2020-08-10 14:11 ` [dpdk-dev] [PATCH 1/1] " Eugenio Pérez
  2020-08-18 14:21 ` [dpdk-dev] [PATCH 0/1] " Eugenio Perez Martin
@ 2020-08-31  7:59 ` Eugenio Pérez
  2020-08-31  7:59   ` [dpdk-dev] [PATCH v2 1/1] vhost: fix iotlb mempool single-consumer flag Eugenio Pérez
  2 siblings, 1 reply; 13+ messages in thread
From: Eugenio Pérez @ 2020-08-31  7:59 UTC (permalink / raw)
  To: dev
  Cc: Adrian Moreno Zapata, Chenbo Xia, Zhihong Wang, Jens Freimann,
	stable, Maxime Coquelin, Kevin Traynor

Bugzilla bug: 523

This behavior is only observed with a packed virtqueue and the virtio-net
kernel driver in the guest, so we could make the single-consumer flag
optional. However, I have not found out why the issue does not show up with
split virtqueues, so the safer option is to never set the flag.
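
For illustration only, the conditional variant would have looked roughly
like the sketch below inside vhost_user_iotlb_init(); the vq_is_packed()
guard is shown purely to make the trade-off concrete and is not part of
the patch (pool_name and socket stand in for the locals of the existing
function):

	unsigned int flags = MEMPOOL_F_NO_CACHE_ALIGN | MEMPOOL_F_SP_PUT;

	/* Hypothetical, NOT what the patch does: keep single-consumer
	 * gets only for split rings, where the race has not been
	 * reproduced. Rejected because it is unclear why split appears
	 * unaffected. */
	if (!vq_is_packed(dev))
		flags |= MEMPOOL_F_SC_GET;

	vq->iotlb_pool = rte_mempool_create(pool_name,
			IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
			0, 0, NULL, NULL, NULL, socket, flags);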

Any comments?

Thanks!

v2: Modify the commit message to not include all the traces and commands.

Eugenio Pérez (1):
  vhost: fix iotlb mempool single-consumer flag

 lib/librte_vhost/iotlb.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

-- 
2.18.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH v2 1/1] vhost: fix iotlb mempool single-consumer flag
  2020-08-31  7:59 ` [dpdk-dev] [PATCH v2 0/1] vhost: Make iotlb mempool not single-consumer Eugenio Pérez
@ 2020-08-31  7:59   ` Eugenio Pérez
  2020-08-31 10:21     ` Xia, Chenbo
                       ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Eugenio Pérez @ 2020-08-31  7:59 UTC (permalink / raw)
  To: dev
  Cc: Adrian Moreno Zapata, Chenbo Xia, Zhihong Wang, Jens Freimann,
	stable, Maxime Coquelin, Kevin Traynor

Bugzilla bug: 523

The control thread (which handles IOTLB messages) and the forwarding
thread both use the IOTLB to translate addresses. The former may modify
the same mempool entry as the latter, which can cause a loop in the
iotlb_pending_entries list.

Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 lib/librte_vhost/iotlb.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/lib/librte_vhost/iotlb.c b/lib/librte_vhost/iotlb.c
index 5b3a0c090..e0b67721b 100644
--- a/lib/librte_vhost/iotlb.c
+++ b/lib/librte_vhost/iotlb.c
@@ -321,8 +321,7 @@ vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
 			IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
 			0, 0, NULL, NULL, NULL, socket,
 			MEMPOOL_F_NO_CACHE_ALIGN |
-			MEMPOOL_F_SP_PUT |
-			MEMPOOL_F_SC_GET);
+			MEMPOOL_F_SP_PUT);
 	if (!vq->iotlb_pool) {
 		VHOST_LOG_CONFIG(ERR,
 				"Failed to create IOTLB cache pool (%s)\n",
-- 
2.18.1
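
For reviewers, a rough sketch of the allocation pattern the commit message
describes (the function roles follow the ThreadSanitizer traces earlier in
the thread; the bodies are illustrative, not the actual vhost code). Both
threads get from the same pool, so gets are multi-consumer; frees are
expected to stay on one thread at a time, which is presumably why
MEMPOOL_F_SP_PUT is kept:

#include <rte_mempool.h>

static struct rte_mempool *iotlb_pool;

/* 'vhost-events' control thread: vhost_user_iotlb_cache_insert()
 * allocates a cache entry when an IOTLB update message arrives. */
static void control_thread_iotlb_update(void)
{
	void *node;

	if (rte_mempool_get(iotlb_pool, &node) != 0)
		return;
	(void)node;	/* ... fill the entry, link it into the cache ... */
}

/* Forwarding lcore: vhost_user_iotlb_pending_insert() allocates a
 * pending entry from the very same pool on a translation miss --
 * a second concurrent consumer, which MEMPOOL_F_SC_GET forbids. */
static void forwarding_thread_iotlb_miss(void)
{
	void *node;

	if (rte_mempool_get(iotlb_pool, &node) != 0)
		return;
	(void)node;	/* ... link the entry into the pending list ... */
}

With the default ring-based mempool handler and no per-lcore cache (the
cache_size argument is 0 here), these flags select single- versus
multi-consumer dequeue on the backing ring, so a single-consumer get
performed from two threads can hand out the same object twice.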


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/1] vhost: fix iotlb mempool single-consumer flag
  2020-08-31  7:59   ` [dpdk-dev] [PATCH v2 1/1] vhost: fix iotlb mempool single-consumer flag Eugenio Pérez
@ 2020-08-31 10:21     ` Xia, Chenbo
  2020-09-18  9:49     ` Maxime Coquelin
  2020-09-18 12:29     ` Maxime Coquelin
  2 siblings, 0 replies; 13+ messages in thread
From: Xia, Chenbo @ 2020-08-31 10:21 UTC (permalink / raw)
  To: Eugenio Pérez, dev
  Cc: Adrian Moreno Zapata, Wang, Zhihong, Jens Freimann, stable,
	Maxime Coquelin, Kevin Traynor


> -----Original Message-----
> From: Eugenio Pérez <eperezma@redhat.com>
> Sent: Monday, August 31, 2020 3:59 PM
> To: dev@dpdk.org
> Cc: Adrian Moreno Zapata <amorenoz@redhat.com>; Xia, Chenbo
> <chenbo.xia@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>; Jens
> Freimann <jfreimann@redhat.com>; stable@dpdk.org; Maxime Coquelin
> <maxime.coquelin@redhat.com>; Kevin Traynor <ktraynor@redhat.com>
> Subject: [PATCH v2 1/1] vhost: fix iotlb mempool single-consumer flag
> 
> Bugzilla bug: 523
> 
> The control thread (which handles IOTLB messages) and the forwarding
> thread both use the IOTLB to translate addresses. The former may modify
> the same mempool entry as the latter, which can cause a loop in the
> iotlb_pending_entries list.
> 
> Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  lib/librte_vhost/iotlb.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/lib/librte_vhost/iotlb.c b/lib/librte_vhost/iotlb.c
> index 5b3a0c090..e0b67721b 100644
> --- a/lib/librte_vhost/iotlb.c
> +++ b/lib/librte_vhost/iotlb.c
> @@ -321,8 +321,7 @@ vhost_user_iotlb_init(struct virtio_net *dev, int
> vq_index)
>  			IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
>  			0, 0, NULL, NULL, NULL, socket,
>  			MEMPOOL_F_NO_CACHE_ALIGN |
> -			MEMPOOL_F_SP_PUT |
> -			MEMPOOL_F_SC_GET);
> +			MEMPOOL_F_SP_PUT);
>  	if (!vq->iotlb_pool) {
>  		VHOST_LOG_CONFIG(ERR,
>  				"Failed to create IOTLB cache pool (%s)\n",
> --
> 2.18.1

Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/1] vhost: fix iotlb mempool single-consumer flag
  2020-08-31  7:59   ` [dpdk-dev] [PATCH v2 1/1] vhost: fix iotlb mempool single-consumer flag Eugenio Pérez
  2020-08-31 10:21     ` Xia, Chenbo
@ 2020-09-18  9:49     ` Maxime Coquelin
  2020-09-18 12:29     ` Maxime Coquelin
  2 siblings, 0 replies; 13+ messages in thread
From: Maxime Coquelin @ 2020-09-18  9:49 UTC (permalink / raw)
  To: Eugenio Pérez, dev
  Cc: Adrian Moreno Zapata, Chenbo Xia, Zhihong Wang, Jens Freimann,
	stable, Kevin Traynor



On 8/31/20 9:59 AM, Eugenio Pérez wrote:
> Bugzilla bug: 523
> 
> The control thread (which handles IOTLB messages) and the forwarding
> thread both use the IOTLB to translate addresses. The former may modify
> the same mempool entry as the latter, which can cause a loop in the
> iotlb_pending_entries list.

Bugzilla ID: 523

> Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  lib/librte_vhost/iotlb.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/lib/librte_vhost/iotlb.c b/lib/librte_vhost/iotlb.c
> index 5b3a0c090..e0b67721b 100644
> --- a/lib/librte_vhost/iotlb.c
> +++ b/lib/librte_vhost/iotlb.c
> @@ -321,8 +321,7 @@ vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
>  			IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
>  			0, 0, NULL, NULL, NULL, socket,
>  			MEMPOOL_F_NO_CACHE_ALIGN |
> -			MEMPOOL_F_SP_PUT |
> -			MEMPOOL_F_SC_GET);
> +			MEMPOOL_F_SP_PUT);
>  	if (!vq->iotlb_pool) {
>  		VHOST_LOG_CONFIG(ERR,
>  				"Failed to create IOTLB cache pool (%s)\n",
> 

I'll fix commit message while applying.

Thanks for your contribution:
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Maxime


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/1] vhost: fix iotlb mempool single-consumer flag
  2020-08-31  7:59   ` [dpdk-dev] [PATCH v2 1/1] vhost: fix iotlb mempool single-consumer flag Eugenio Pérez
  2020-08-31 10:21     ` Xia, Chenbo
  2020-09-18  9:49     ` Maxime Coquelin
@ 2020-09-18 12:29     ` Maxime Coquelin
  2 siblings, 0 replies; 13+ messages in thread
From: Maxime Coquelin @ 2020-09-18 12:29 UTC (permalink / raw)
  To: Eugenio Pérez, dev
  Cc: Adrian Moreno Zapata, Chenbo Xia, Zhihong Wang, Jens Freimann,
	stable, Kevin Traynor



On 8/31/20 9:59 AM, Eugenio Pérez wrote:
> Bugzilla bug: 523
> 
> The control thread (which handles IOTLB messages) and the forwarding
> thread both use the IOTLB to translate addresses. The former may modify
> the same mempool entry as the latter, which can cause a loop in the
> iotlb_pending_entries list.
> 
> Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  lib/librte_vhost/iotlb.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)

Applied to dpdk-next-virtio/master.

Thanks,
Maxime


^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2020-09-18 12:29 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-08-10 14:11 [dpdk-dev] [PATCH 0/1] vhost: fix iotlb mempool single-consumer flag Eugenio Pérez
2020-08-10 14:11 ` [dpdk-dev] [PATCH 1/1] " Eugenio Pérez
2020-08-25  9:17   ` Kevin Traynor
2020-08-26  6:28   ` Xia, Chenbo
2020-08-26 12:50     ` Eugenio Perez Martin
2020-08-27  1:20       ` Xia, Chenbo
2020-08-28 18:40   ` Jens Freimann
2020-08-18 14:21 ` [dpdk-dev] [PATCH 0/1] " Eugenio Perez Martin
2020-08-31  7:59 ` [dpdk-dev] [PATCH v2 0/1] vhost: Make iotlb mempool not single-consumer Eugenio Pérez
2020-08-31  7:59   ` [dpdk-dev] [PATCH v2 1/1] vhost: fix iotlb mempool single-consumer flag Eugenio Pérez
2020-08-31 10:21     ` Xia, Chenbo
2020-09-18  9:49     ` Maxime Coquelin
2020-09-18 12:29     ` Maxime Coquelin

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).