From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Marchand <dmarchan@redhat.com>
Date: Tue, 16 Mar 2021 14:13:18 +0100
To: Maxime Coquelin
Cc: dev, "Xia, Chenbo", Adrian Moreno Zapata, Olivier Matz, bnemeth@redhat.com
Subject: Re: [dpdk-dev] [PATCH v2 2/3] vhost: move dirty logging cache out of the virtqueue
In-Reply-To: <20210316124153.503928-3-maxime.coquelin@redhat.com>
References: <20210316124153.503928-1-maxime.coquelin@redhat.com> <20210316124153.503928-3-maxime.coquelin@redhat.com>

On Tue, Mar 16, 2021 at 1:42 PM Maxime Coquelin wrote:
>
> This patch moves the per-virtqueue dirty logging cache
> out of the virtqueue struct, by allocating it dynamically
> only when live migration is enabled.
>
> It saves 8 cachelines in the vhost_virtqueue struct.
>
> Signed-off-by: Maxime Coquelin
> ---
>  lib/librte_vhost/vhost.c      | 14 ++++++++++++++
>  lib/librte_vhost/vhost.h      |  2 +-
>  lib/librte_vhost/vhost_user.c | 25 +++++++++++++++++++++++++
>  3 files changed, 40 insertions(+), 1 deletion(-)
>
> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> index 5a7c0c6cff..c3490ce897 100644
> --- a/lib/librte_vhost/vhost.c
> +++ b/lib/librte_vhost/vhost.c
> @@ -145,6 +145,10 @@ __vhost_log_cache_sync(struct virtio_net *dev, struct vhost_virtqueue *vq)
>         if (unlikely(!dev->log_base))
>                 return;
>
> +       /* No cache, nothing to sync */
> +       if (unlikely(!vq->log_cache))
> +               return;
> +
>         rte_atomic_thread_fence(__ATOMIC_RELEASE);
>
>         log_base = (unsigned long *)(uintptr_t)dev->log_base;
> @@ -177,6 +181,14 @@ vhost_log_cache_page(struct virtio_net *dev, struct vhost_virtqueue *vq,
>         uint32_t offset = page / (sizeof(unsigned long) << 3);
>         int i;
>
> +       if (unlikely(!vq->log_cache)) {
> +               /* No logging cache allocated, write dirty log map directly */
> +               rte_smp_wmb();

We try not to reintroduce full barriers (checkpatch caught this).

> +               vhost_log_page((uint8_t *)(uintptr_t)dev->log_base, page);
> +
> +               return;
> +       }
> +
>         for (i = 0; i < vq->log_cache_nb_elem; i++) {
>                 struct log_cache_entry *elem = vq->log_cache + i;
>
> @@ -354,6 +366,8 @@ free_vq(struct virtio_net *dev, struct vhost_virtqueue *vq)
>         }
>         rte_free(vq->batch_copy_elems);
>         rte_mempool_free(vq->iotlb_pool);
> +       if (vq->log_cache)
> +               rte_free(vq->log_cache);

No if() needed.
>         rte_free(vq);
>  }
>
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index 717f410548..3a71dfeed9 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -183,7 +183,7 @@ struct vhost_virtqueue {
>         bool used_wrap_counter;
>         bool avail_wrap_counter;
>
> -       struct log_cache_entry log_cache[VHOST_LOG_CACHE_NR];
> +       struct log_cache_entry *log_cache;
>         uint16_t log_cache_nb_elem;
>
>         rte_rwlock_t iotlb_lock;
> diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
> index a60bb945ad..0f452d6ff3 100644
> --- a/lib/librte_vhost/vhost_user.c
> +++ b/lib/librte_vhost/vhost_user.c
> @@ -2022,6 +2022,11 @@ vhost_user_get_vring_base(struct virtio_net **pdev,
>         rte_free(vq->batch_copy_elems);
>         vq->batch_copy_elems = NULL;
>
> +       if (vq->log_cache) {
> +               rte_free(vq->log_cache);
> +               vq->log_cache = NULL;
> +       }

Idem.

> +
>         msg->size = sizeof(msg->payload.state);
>         msg->fd_num = 0;
>
> @@ -2121,6 +2126,7 @@ vhost_user_set_log_base(struct virtio_net **pdev, struct VhostUserMsg *msg,
>         int fd = msg->fds[0];
>         uint64_t size, off;
>         void *addr;
> +       uint32_t i;
>
>         if (validate_msg_fds(msg, 1) != 0)
>                 return RTE_VHOST_MSG_RESULT_ERR;
> @@ -2174,6 +2180,25 @@ vhost_user_set_log_base(struct virtio_net **pdev, struct VhostUserMsg *msg,
>         dev->log_base = dev->log_addr + off;
>         dev->log_size = size;
>
> +       for (i = 0; i < dev->nr_vring; i++) {
> +               struct vhost_virtqueue *vq = dev->virtqueue[i];
> +
> +               if (vq->log_cache) {
> +                       rte_free(vq->log_cache);
> +                       vq->log_cache = NULL;
> +               }

Idem.
> +               vq->log_cache_nb_elem = 0;
> +               vq->log_cache = rte_zmalloc("vq log cache",
> +                               sizeof(struct log_cache_entry) * VHOST_LOG_CACHE_NR,
> +                               0);
> +               /*
> +                * If log cache alloc fail, don't fail migration, but no
> +                * caching will be done, which will impact performance
> +                */
> +               if (!vq->log_cache)
> +                       VHOST_LOG_CONFIG(ERR, "Failed to allocate VQ logging cache\n");
> +       }
> +
>         /*
>          * The spec is not clear about it (yet), but QEMU doesn't expect
>          * any payload in the reply.
>          */
> --
> 2.29.2
>


-- 
David Marchand