Subject: Re: [PATCH v3 1/4] vhost: change vhost_virtqueue access lock to a read/write one
From: Maxime Coquelin
To: "Xia, Chenbo", Eelco Chaudron, "david.marchand@redhat.com"
Cc: "dev@dpdk.org"
Date: Wed, 31 May 2023 11:27:31 +0200
References: <168431450017.558450.16680518469610688737.stgit@ebuild.local>
 <168431452543.558450.14131829672896784074.stgit@ebuild.local>

On 5/31/23 08:37, Xia, Chenbo wrote:
> Hi Eelco,
>
>> -----Original Message-----
>> From: Eelco Chaudron
>> Sent: Wednesday, May 17, 2023 5:09 PM
>> To: maxime.coquelin@redhat.com; Xia, Chenbo; david.marchand@redhat.com
>> Cc: dev@dpdk.org
>> Subject: [PATCH v3 1/4] vhost: change vhost_virtqueue access lock to a
>> read/write one
>>
>> This change will allow the vhost interrupt datapath handling to be split
>> between two processes without one of them holding an explicit lock.
>>
>> Signed-off-by: Eelco Chaudron
>> ---
>>  lib/eal/include/generic/rte_rwlock.h |   17 ++++++
>>  lib/vhost/vhost.c                    |   46 +++++++++--------
>>  lib/vhost/vhost.h                    |    4 +-
>>  lib/vhost/vhost_user.c               |   14 +++--
>>  lib/vhost/virtio_net.c               |   90 +++++++++++++++++++------------------
>>  5 files changed, 94 insertions(+), 77 deletions(-)
>>
>> diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h
>> index 71e2d8d5f4..9e083bbc61 100644
>> --- a/lib/eal/include/generic/rte_rwlock.h
>> +++ b/lib/eal/include/generic/rte_rwlock.h
>> @@ -236,6 +236,23 @@ rte_rwlock_write_unlock(rte_rwlock_t *rwl)
>>  	__atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_WRITE, __ATOMIC_RELEASE);
>>  }
>>
>> +/**
>> + * Test if the write lock is taken.
>> + *
>> + * @param rwl
>> + *   A pointer to a rwlock structure.
>> + * @return
>> + *   1 if the write lock is currently taken; 0 otherwise.
>> + */
>> +static inline int
>> +rte_rwlock_write_is_locked(rte_rwlock_t *rwl)
>> +{
>> +	if (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) & RTE_RWLOCK_WRITE)
>> +		return 1;
>> +
>> +	return 0;
>> +}
>> +
>
> Again, we need to update the release note as this is a new EAL API.
>
>>  /**
>>   * Try to execute critical section in a hardware memory transaction, if it
>>   * fails or not available take a read lock
>> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
>> index ef37943817..74bdbfd810 100644
>> --- a/lib/vhost/vhost.c
>> +++ b/lib/vhost/vhost.c
>> @@ -393,9 +393,9 @@ free_vq(struct virtio_net *dev, struct vhost_virtqueue *vq)
>>  	else
>>  		rte_free(vq->shadow_used_split);
>>
>> -	rte_spinlock_lock(&vq->access_lock);
>> +	rte_rwlock_write_lock(&vq->access_lock);
>>  	vhost_free_async_mem(vq);
>> -	rte_spinlock_unlock(&vq->access_lock);
>> +	rte_rwlock_write_unlock(&vq->access_lock);
>>  	rte_free(vq->batch_copy_elems);
>>  	vhost_user_iotlb_destroy(vq);
>>  	rte_free(vq->log_cache);
>> @@ -630,7 +630,7 @@ alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx)
>>
>>  	dev->virtqueue[i] = vq;
>>  	init_vring_queue(dev, vq, i);
>> -	rte_spinlock_init(&vq->access_lock);
>> +	rte_rwlock_init(&vq->access_lock);
>>  	vq->avail_wrap_counter = 1;
>>  	vq->used_wrap_counter = 1;
>>  	vq->signalled_used_valid = false;
>> @@ -1305,14 +1305,14 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
>>  	if (!vq)
>>  		return -1;
>>
>> -	rte_spinlock_lock(&vq->access_lock);
>> +	rte_rwlock_read_lock(&vq->access_lock);
>>
>>  	if (vq_is_packed(dev))
>>  		vhost_vring_call_packed(dev, vq);
>>  	else
>>  		vhost_vring_call_split(dev, vq);
>>
>> -	rte_spinlock_unlock(&vq->access_lock);
>> +	rte_rwlock_read_unlock(&vq->access_lock);
>
> Not sure about this. vhost_vring_call_packed/split changes some fields in
> the vq. Should we use a write lock here?

I don't think so. The purpose of the access_lock is not to make the
datapath thread-safe, but to protect the datapath from metadata changes
by the control path.

Thanks,
Maxime

>
> Thanks,
> Chenbo
>
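
To make the locking scheme discussed above concrete, here is a minimal
sketch. It is not code from the patch: example_vq, datapath_kick,
control_reconfigure and write_side_helper are made-up illustrative names.
Datapath callers take the lock for read, so they exclude only metadata
changes and do not serialize against each other; the control path takes
it for write; and the new rte_rwlock_write_is_locked() lets helpers that
require the write lock assert their locking contract.

#include <rte_rwlock.h>
#include <rte_debug.h>

/* Illustrative stand-in for struct vhost_virtqueue. */
struct example_vq {
	rte_rwlock_t access_lock;
	/* ... ring addresses and other metadata ... */
};

/* Datapath side: may run in several threads or processes at once.
 * The read lock does not serialize these callers against each other;
 * it only keeps the vq metadata stable while they use it. */
static void
datapath_kick(struct example_vq *vq)
{
	rte_rwlock_read_lock(&vq->access_lock);
	/* ... read ring state, signal the guest ... */
	rte_rwlock_read_unlock(&vq->access_lock);
}

/* Control path side: takes the write lock, excluding every datapath
 * reader while vq metadata is being changed. */
static void
control_reconfigure(struct example_vq *vq)
{
	rte_rwlock_write_lock(&vq->access_lock);
	/* ... update ring addresses, enable/disable the queue ... */
	rte_rwlock_write_unlock(&vq->access_lock);
}

/* A helper that must only run with the write lock held can check its
 * contract with the rte_rwlock_write_is_locked() added by this patch.
 * RTE_ASSERT is compiled in only when RTE_ENABLE_ASSERT is set. */
static void
write_side_helper(struct example_vq *vq)
{
	RTE_ASSERT(rte_rwlock_write_is_locked(&vq->access_lock));
	/* ... safe to modify vq metadata here ... */
}

This is also why rte_vhost_vring_call() in the diff takes the read lock
even though vhost_vring_call_packed/split write some vq fields: those
fields are datapath state, and per Maxime's reply the lock's job is only
to fence the datapath off from control-path metadata changes.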