From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Hu, Jiayu"
To: David Marchand, dev@dpdk.org
Cc: maxime.coquelin@redhat.com, "Xia, Chenbo", "Wang, YuanX", "Ding, Xuan"
Subject: RE: [RFC PATCH v2 3/9] vhost: annotate virtqueue access lock
Date: Thu, 7 Apr 2022 01:40:33 +0000
References: <20220328121758.26632-1-david.marchand@redhat.com> <20220330134956.18927-1-david.marchand@redhat.com> <20220330134956.18927-4-david.marchand@redhat.com>
In-Reply-To: <20220330134956.18927-4-david.marchand@redhat.com>
List-Id: DPDK patches and discussions

Hi David,

Please see replies inline.

> -----Original Message-----
> From: David Marchand
> Sent: Wednesday, March 30, 2022 9:50 PM
> To: dev@dpdk.org
> Cc: maxime.coquelin@redhat.com; Xia, Chenbo; Hu, Jiayu; Wang, YuanX; Ding, Xuan
> Subject: [RFC PATCH v2 3/9] vhost: annotate virtqueue access lock
>
> This change simply annotates existing paths of the code leading to
> manipulations of the vq->access_lock.
>
> One small change is required: vhost_poll_enqueue_completed was getting a
> queue_id to get hold of the vq, while its callers already knew of the vq. For
> the annotation sake, vq is now directly passed.
>
> vhost_user_lock_all_queue_pairs and vhost_user_unlock_all_queue_pairs
> are skipped since vq->access_lock are conditionally held.
>
> Signed-off-by: David Marchand
> ---
>  lib/vhost/vhost.h      |  2 ++
>  lib/vhost/vhost_user.c |  2 ++
>  lib/vhost/virtio_net.c | 16 ++++++++++++----
>  3 files changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> index a9edc271aa..158460b7d7 100644
> --- a/lib/vhost/vhost.h
> +++ b/lib/vhost/vhost.h
> @@ -834,6 +834,7 @@ vhost_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old)
>
>  static __rte_always_inline void
>  vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
> +        RTE_EXC_LOCK_REQUIRES(vq->access_lock)

vhost_vring_call_split() is also called from rte_vhost_vring_call(), but
rte_vhost_vring_call() doesn't acquire vq->access_lock before making that
call.

>  {
>          /* Flush used->idx update before we read avail->flags. */
>          rte_atomic_thread_fence(__ATOMIC_SEQ_CST);
> @@ -872,6 +873,7 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
>
>  static __rte_always_inline void
>  vhost_vring_call_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
> +        RTE_EXC_LOCK_REQUIRES(vq->access_lock)

Ditto.
>  {
>          uint16_t old, new, off, off_wrap;
>          bool signalled_used_valid, kick = false;
> diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
> index 1d390677fa..87eaa2ab4a 100644
> --- a/lib/vhost/vhost_user.c
> +++ b/lib/vhost/vhost_user.c
> @@ -2909,6 +2909,7 @@ vhost_user_check_and_alloc_queue_pair(struct virtio_net *dev,
>
>  static void
>  vhost_user_lock_all_queue_pairs(struct virtio_net *dev)
> +        RTE_NO_ANNOTATED_LOCK_CHECK
>  {
>          unsigned int i = 0;
>          unsigned int vq_num = 0;
> @@ -2926,6 +2927,7 @@ vhost_user_lock_all_queue_pairs(struct virtio_net *dev)
>
>  static void
>  vhost_user_unlock_all_queue_pairs(struct virtio_net *dev)
> +        RTE_NO_ANNOTATED_LOCK_CHECK
>  {
>          unsigned int i = 0;
>          unsigned int vq_num = 0;
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index 5f432b0d77..514ee00993 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -1246,6 +1246,7 @@ vhost_enqueue_single_packed(struct virtio_net *dev,
>  static __rte_noinline uint32_t
>  virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>          struct rte_mbuf **pkts, uint32_t count)
> +        RTE_EXC_LOCK_REQUIRES(vq->access_lock)
>  {
>          uint32_t pkt_idx = 0;
>          uint16_t num_buffers;
> @@ -1441,6 +1442,7 @@ virtio_dev_rx_packed(struct virtio_net *dev,
>          struct vhost_virtqueue *__rte_restrict vq,
>          struct rte_mbuf **__rte_restrict pkts,
>          uint32_t count)
> +        RTE_EXC_LOCK_REQUIRES(vq->access_lock)
>  {
>          uint32_t pkt_idx = 0;
>
> @@ -1955,11 +1957,11 @@ write_back_completed_descs_packed(struct vhost_virtqueue *vq,
>  }
>
>  static __rte_always_inline uint16_t
> -vhost_poll_enqueue_completed(struct virtio_net *dev, uint16_t queue_id,
> +vhost_poll_enqueue_completed(struct virtio_net *dev, struct vhost_virtqueue *vq,
>          struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
>          uint16_t vchan_id)
> +        RTE_EXC_LOCK_REQUIRES(vq->access_lock)

rte_vhost_clear_queue_thread_unsafe() doesn't acquire vq->access_lock.
Will it cause a compiler warning?

Thanks,
Jiayu

>  {
> -        struct vhost_virtqueue *vq = dev->virtqueue[queue_id];
>          struct vhost_async *async = vq->async;
>          struct async_inflight_info *pkts_info = async->pkts_info;
>          uint16_t nr_cpl_pkts = 0;
> @@ -2062,7 +2064,7 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
>                  goto out;
>          }
>
> -        n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id);
> +        n_pkts_cpl = vhost_poll_enqueue_completed(dev, vq, pkts, count, dma_id, vchan_id);
>
> out:
>          rte_spinlock_unlock(&vq->access_lock);
> @@ -2104,7 +2106,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
>                  return 0;
>          }
>
> -        n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id);
> +        n_pkts_cpl = vhost_poll_enqueue_completed(dev, vq, pkts, count, dma_id, vchan_id);
>
>          return n_pkts_cpl;
>  }
> @@ -2679,6 +2681,7 @@ static uint16_t
>  virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>          struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
>          bool legacy_ol_flags)
> +        RTE_EXC_LOCK_REQUIRES(vq->access_lock)
>  {
>          uint16_t i;
>          uint16_t free_entries;
> @@ -2774,6 +2777,7 @@ static uint16_t
>  virtio_dev_tx_split_legacy(struct virtio_net *dev,
>          struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool,
>          struct rte_mbuf **pkts, uint16_t count)
> +        RTE_EXC_LOCK_REQUIRES(vq->access_lock)
>  {
>          return virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count, true);
>  }
> @@ -2783,6 +2787,7 @@ static uint16_t
>  virtio_dev_tx_split_compliant(struct virtio_net *dev,
>          struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool,
>          struct rte_mbuf **pkts, uint16_t count)
> +        RTE_EXC_LOCK_REQUIRES(vq->access_lock)
>  {
>          return virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count, false);
>  }
> @@ -2982,6 +2987,7 @@ virtio_dev_tx_packed(struct virtio_net *dev,
>          struct rte_mbuf **__rte_restrict pkts,
>          uint32_t count,
>          bool legacy_ol_flags)
> +        RTE_EXC_LOCK_REQUIRES(vq->access_lock)
>  {
>          uint32_t pkt_idx = 0;
>
> @@ -3025,6 +3031,7 @@ static uint16_t
>  virtio_dev_tx_packed_legacy(struct virtio_net *dev,
>          struct vhost_virtqueue *__rte_restrict vq, struct rte_mempool *mbuf_pool,
>          struct rte_mbuf **__rte_restrict pkts, uint32_t count)
> +        RTE_EXC_LOCK_REQUIRES(vq->access_lock)
>  {
>          return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count, true);
>  }
> @@ -3034,6 +3041,7 @@ static uint16_t
>  virtio_dev_tx_packed_compliant(struct virtio_net *dev,
>          struct vhost_virtqueue *__rte_restrict vq, struct rte_mempool *mbuf_pool,
>          struct rte_mbuf **__rte_restrict pkts, uint32_t count)
> +        RTE_EXC_LOCK_REQUIRES(vq->access_lock)
>  {
>          return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count, false);
>  }
> --
> 2.23.0