DPDK patches and discussions
From: "Xia, Chenbo" <chenbo.xia@intel.com>
To: Eelco Chaudron <echaudro@redhat.com>,
	Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: "david.marchand@redhat.com" <david.marchand@redhat.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: RE: [PATCH v3 1/4] vhost: change vhost_virtqueue access lock to a read/write one
Date: Thu, 1 Jun 2023 01:45:16 +0000
Message-ID: <SN6PR11MB350464DBEEAC878BE79D4B299C499@SN6PR11MB3504.namprd11.prod.outlook.com>
In-Reply-To: <04BFE798-9FA7-4D8F-9360-266946DAD5BC@redhat.com>

> -----Original Message-----
> From: Eelco Chaudron <echaudro@redhat.com>
> Sent: Wednesday, May 31, 2023 7:14 PM
> To: Maxime Coquelin <maxime.coquelin@redhat.com>
> Cc: Xia, Chenbo <chenbo.xia@intel.com>; david.marchand@redhat.com;
> dev@dpdk.org
> Subject: Re: [PATCH v3 1/4] vhost: change vhost_virtqueue access lock to a
> read/write one
> 
> 
> 
> On 31 May 2023, at 11:27, Maxime Coquelin wrote:
> 
> > On 5/31/23 08:37, Xia, Chenbo wrote:
> >> Hi Eelco,
> >>
> >>> -----Original Message-----
> >>> From: Eelco Chaudron <echaudro@redhat.com>
> >>> Sent: Wednesday, May 17, 2023 5:09 PM
> >>> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>;
> >>> david.marchand@redhat.com
> >>> Cc: dev@dpdk.org
> >>> Subject: [PATCH v3 1/4] vhost: change vhost_virtqueue access lock to a
> >>> read/write one
> >>>
> >>> This change will allow the vhost interrupt datapath handling to be
> >>> split between two processes without one of them holding an explicit
> >>> lock.
> >>>
> >>> Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
> >>> ---
> >>>   lib/eal/include/generic/rte_rwlock.h |   17 ++++++
> >>>   lib/vhost/vhost.c                    |   46 +++++++++--------
> >>>   lib/vhost/vhost.h                    |    4 +-
> >>>   lib/vhost/vhost_user.c               |   14 +++--
> >>>   lib/vhost/virtio_net.c               |   90 +++++++++++++++++-----------------
> >>>   5 files changed, 94 insertions(+), 77 deletions(-)
> >>>
> >>> diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h
> >>> index 71e2d8d5f4..9e083bbc61 100644
> >>> --- a/lib/eal/include/generic/rte_rwlock.h
> >>> +++ b/lib/eal/include/generic/rte_rwlock.h
> >>> @@ -236,6 +236,23 @@ rte_rwlock_write_unlock(rte_rwlock_t *rwl)
> >>>   	__atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_WRITE, __ATOMIC_RELEASE);
> >>>   }
> >>>
> >>> +/**
> >>> + * Test if the write lock is taken.
> >>> + *
> >>> + * @param rwl
> >>> + *   A pointer to a rwlock structure.
> >>> + * @return
> >>> + *   1 if the write lock is currently taken; 0 otherwise.
> >>> + */
> >>> +static inline int
> >>> +rte_rwlock_write_is_locked(rte_rwlock_t *rwl)
> >>> +{
> >>> +	if (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) & RTE_RWLOCK_WRITE)
> >>> +		return 1;
> >>> +
> >>> +	return 0;
> >>> +}
> >>> +
> >>
> >> Again we need to update release note as it's a new EAL API.
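(For illustration only: a minimal sketch, not part of this series, of how
the new helper could be used for a debug check. The helper name
vhost_modify_vq_metadata and its call site are assumptions; RTE_ASSERT is
DPDK's rte_debug.h macro and is compiled in only with RTE_ENABLE_ASSERT:)

	/* Hypothetical debug check: the caller of a metadata-modifying
	 * helper is expected to hold the write side of the vq lock. */
	static void
	vhost_modify_vq_metadata(struct vhost_virtqueue *vq)
	{
		RTE_ASSERT(rte_rwlock_write_is_locked(&vq->access_lock));
		/* ... update vq fields here ... */
	}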
> >>
> >>>   /**
> >>>    * Try to execute critical section in a hardware memory transaction, if it
> >>>    * fails or not available take a read lock
> >>> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> >>> index ef37943817..74bdbfd810 100644
> >>> --- a/lib/vhost/vhost.c
> >>> +++ b/lib/vhost/vhost.c
> >>> @@ -393,9 +393,9 @@ free_vq(struct virtio_net *dev, struct vhost_virtqueue *vq)
> >>>   	else
> >>>   		rte_free(vq->shadow_used_split);
> >>>
> >>> -	rte_spinlock_lock(&vq->access_lock);
> >>> +	rte_rwlock_write_lock(&vq->access_lock);
> >>>   	vhost_free_async_mem(vq);
> >>> -	rte_spinlock_unlock(&vq->access_lock);
> >>> +	rte_rwlock_write_unlock(&vq->access_lock);
> >>>   	rte_free(vq->batch_copy_elems);
> >>>   	vhost_user_iotlb_destroy(vq);
> >>>   	rte_free(vq->log_cache);
> >>> @@ -630,7 +630,7 @@ alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx)
> >>>
> >>>   		dev->virtqueue[i] = vq;
> >>>   		init_vring_queue(dev, vq, i);
> >>> -		rte_spinlock_init(&vq->access_lock);
> >>> +		rte_rwlock_init(&vq->access_lock);
> >>>   		vq->avail_wrap_counter = 1;
> >>>   		vq->used_wrap_counter = 1;
> >>>   		vq->signalled_used_valid = false;
> >>> @@ -1305,14 +1305,14 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
> >>>   	if (!vq)
> >>>   		return -1;
> >>>
> >>> -	rte_spinlock_lock(&vq->access_lock);
> >>> +	rte_rwlock_read_lock(&vq->access_lock);
> >>>
> >>>   	if (vq_is_packed(dev))
> >>>   		vhost_vring_call_packed(dev, vq);
> >>>   	else
> >>>   		vhost_vring_call_split(dev, vq);
> >>>
> >>> -	rte_spinlock_unlock(&vq->access_lock);
> >>> +	rte_rwlock_read_unlock(&vq->access_lock);
> >>
> >> Not sure about this. vhost_vring_call_packed/split is changing some
> >> fields in the vq. Should we use a write lock here?
> >
> > I don't think so, the purpose of the access_lock is not to make the
> > datapath thread-safe, but to protect the datapath from metadata changes
> > by the control path.
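(To make the intended split concrete, a minimal sketch assembled from the
hunks above; the comments are illustrative and not from the patch:)

	/* Control path (vhost-user message handling): exclusive access
	 * while virtqueue metadata is changed, e.g. in free_vq() above. */
	rte_rwlock_write_lock(&vq->access_lock);
	vhost_free_async_mem(vq);
	rte_rwlock_write_unlock(&vq->access_lock);

	/* Datapath (e.g. rte_vhost_vring_call): a read lock is enough,
	 * because it only has to exclude concurrent metadata changes.
	 * Serializing datapath callers on the same virtqueue remains the
	 * application's responsibility, as it was with the spinlock. */
	rte_rwlock_read_lock(&vq->access_lock);
	if (vq_is_packed(dev))
		vhost_vring_call_packed(dev, vq);
	else
		vhost_vring_call_split(dev, vq);
	rte_rwlock_read_unlock(&vq->access_lock);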
> 
> Thanks Chenbo for the review, and see Maxime’s comment above. Does this
> clarify your concern/question?

Makes sense to me. Thanks Eelco and Maxime!

With the release note added:

Reviewed-by: Chenbo Xia <chenbo.xia@intel.com> 
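(For reference, a hypothetical sketch of the kind of release-note entry
being asked for; the target release file and the exact wording are
assumptions, not the actual merged change:)

	--- a/doc/guides/rel_notes/release_23_07.rst
	+++ b/doc/guides/rel_notes/release_23_07.rst
	+* **Added rwlock write-lock state check API.**
	+
	+  Added ``rte_rwlock_write_is_locked()`` to return whether an rwlock
	+  is currently write-locked.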

> 
> >>
> >> Thanks,
> >> Chenbo
> >>



Thread overview: 35+ messages
2023-05-17  9:08 [PATCH v3 0/4] vhost: add device op to offload the interrupt kick Eelco Chaudron
2023-05-17  9:08 ` [PATCH v3 1/4] vhost: change vhost_virtqueue access lock to a read/write one Eelco Chaudron
2023-05-17 17:33   ` Maxime Coquelin
2023-05-18 14:46     ` Eelco Chaudron
2023-05-31  6:37   ` Xia, Chenbo
2023-05-31  9:27     ` Maxime Coquelin
2023-05-31 11:13       ` Eelco Chaudron
2023-06-01  1:45         ` Xia, Chenbo [this message]
2023-05-17  9:08 ` [PATCH v3 2/4] vhost: make the guest_notifications statistic counter atomic Eelco Chaudron
2023-05-30 12:52   ` Maxime Coquelin
2023-05-31  7:03   ` Xia, Chenbo
2023-05-17  9:09 ` [PATCH v3 3/4] vhost: fix invalid call FD handling Eelco Chaudron
2023-05-30 12:54   ` Maxime Coquelin
2023-05-31  6:12     ` Xia, Chenbo
2023-05-31  9:30       ` Maxime Coquelin
2023-05-17  9:09 ` [PATCH v3 4/4] vhost: add device op to offload the interrupt kick Eelco Chaudron
2023-05-30 13:02   ` Maxime Coquelin
2023-05-30 13:16     ` Thomas Monjalon
2023-05-30 15:16       ` Maxime Coquelin
2023-05-31  6:19         ` Xia, Chenbo
2023-05-31  9:29           ` Maxime Coquelin
2023-05-31 11:21             ` Eelco Chaudron
2023-06-01  2:18             ` Xia, Chenbo
2023-06-01  8:15               ` Eelco Chaudron
2023-06-01  8:29                 ` Maxime Coquelin
2023-06-01  8:49                   ` Eelco Chaudron
2023-06-01  8:53                     ` Maxime Coquelin
2023-05-31 11:49     ` David Marchand
2023-05-31 12:01   ` David Marchand
2023-05-31 12:48     ` Maxime Coquelin
2023-05-31 13:13       ` Eelco Chaudron
2023-05-31 14:12   ` David Marchand
2023-05-31 14:18     ` Maxime Coquelin
2023-06-01 20:00 ` [PATCH v3 0/4] " Maxime Coquelin
2023-06-02  6:20   ` Eelco Chaudron
