From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Cheng Jiang, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, yvonnex.yang@intel.com, yinan.wang@intel.com, yong.liu@intel.com
Subject: Re: [dpdk-dev] [PATCH v7 2/4] vhost: add support for packed ring in async vhost
Date: Wed, 14 Apr 2021 15:40:49 +0200
In-Reply-To: <20210414061343.54919-3-Cheng1.jiang@intel.com>
References: <20210317085426.10119-1-Cheng1.jiang@intel.com> <20210414061343.54919-1-Cheng1.jiang@intel.com> <20210414061343.54919-3-Cheng1.jiang@intel.com>
List-Id: DPDK patches and discussions

On 4/14/21 8:13 AM, Cheng Jiang wrote:
> For now async vhost data path only supports split ring. This patch
> enables packed ring in async vhost data path to make async vhost
> compatible with virtio 1.1 spec.
>
> Signed-off-by: Cheng Jiang
> ---
>  lib/librte_vhost/rte_vhost_async.h |   1 +
>  lib/librte_vhost/vhost.c           |  49 ++--
>  lib/librte_vhost/vhost.h           |  15 +-
>  lib/librte_vhost/virtio_net.c      | 432 +++++++++++++++++++++++++++--
>  4 files changed, 456 insertions(+), 41 deletions(-)
>
> diff --git a/lib/librte_vhost/rte_vhost_async.h b/lib/librte_vhost/rte_vhost_async.h
> index c855ff875..6faa31f5a 100644
> --- a/lib/librte_vhost/rte_vhost_async.h
> +++ b/lib/librte_vhost/rte_vhost_async.h
> @@ -89,6 +89,7 @@ struct rte_vhost_async_channel_ops {
>  struct async_inflight_info {
>  	struct rte_mbuf *mbuf;
>  	uint16_t descs; /* num of descs inflight */
> +	uint16_t nr_buffers; /* num of buffers inflight for packed ring */
>  };
>
>  /**
> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> index a70fe01d8..f509186c6 100644
> --- a/lib/librte_vhost/vhost.c
> +++ b/lib/librte_vhost/vhost.c
> @@ -338,19 +338,22 @@ cleanup_device(struct virtio_net *dev, int destroy)
>  }
>
>  static void
> -vhost_free_async_mem(struct vhost_virtqueue *vq)
> +vhost_free_async_mem(struct virtio_net *dev, struct vhost_virtqueue *vq)
>  {
> -	if (vq->async_pkts_info)
> -		rte_free(vq->async_pkts_info);
> -	if (vq->async_descs_split)
> +	rte_free(vq->async_pkts_info);
> +
> +	if (vq_is_packed(dev)) {
> +		rte_free(vq->async_buffers_packed);
> +		vq->async_buffers_packed = NULL;
> +	} else {

Doing this is not necessary:

	rte_free(vq->async_buffers_packed);
	vq->async_buffers_packed = NULL;
	rte_free(vq->async_descs_split);
	vq->async_descs_split = NULL;

The above will just work, and it avoids adding the dev parameter.
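For illustration, the reason the branch is unnecessary is that rte_free(), like the standard free(), is documented to do nothing when passed a NULL pointer, so both union members can be released unconditionally. A minimal standalone sketch of the pattern follows; plain malloc()/free() and the made-up struct/function names stand in for the rte_* calls and the real vhost types:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for the virtqueue's async buffer pointers.
 * Only one of the two members is ever allocated, depending on the
 * ring layout, but freeing both is safe: free(NULL) is a no-op. */
struct async_mem {
	void *buffers_packed;
	void *descs_split;
};

static void
async_mem_free(struct async_mem *m)
{
	free(m->buffers_packed);	/* no-op when NULL */
	m->buffers_packed = NULL;
	free(m->descs_split);		/* no-op when NULL */
	m->descs_split = NULL;
}
```

Resetting the pointers to NULL after each free also makes the function idempotent, so calling it twice (e.g. from both unregister and teardown paths) cannot double-free.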
>  		rte_free(vq->async_descs_split);
> -	if (vq->it_pool)
> -		rte_free(vq->it_pool);
> -	if (vq->vec_pool)
> -		rte_free(vq->vec_pool);
> +		vq->async_descs_split = NULL;
> +	}
> +
> +	rte_free(vq->it_pool);
> +	rte_free(vq->vec_pool);
>
>  	vq->async_pkts_info = NULL;
> -	vq->async_descs_split = NULL;
>  	vq->it_pool = NULL;
>  	vq->vec_pool = NULL;
>  }
> @@ -360,10 +363,10 @@ free_vq(struct virtio_net *dev, struct vhost_virtqueue *vq)
>  {
>  	if (vq_is_packed(dev))
>  		rte_free(vq->shadow_used_packed);
> -	else {
> +	else
>  		rte_free(vq->shadow_used_split);
> -		vhost_free_async_mem(vq);
> -	}
> +
> +	vhost_free_async_mem(dev, vq);
>  	rte_free(vq->batch_copy_elems);
>  	if (vq->iotlb_pool)
>  		rte_mempool_free(vq->iotlb_pool);
> @@ -1626,10 +1629,9 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
>  	if (unlikely(vq == NULL || !dev->async_copy))
>  		return -1;
>
> -	/* packed queue is not supported */
> -	if (unlikely(vq_is_packed(dev) || !f.async_inorder)) {
> +	if (unlikely(!f.async_inorder)) {
>  		VHOST_LOG_CONFIG(ERR,
> -			"async copy is not supported on packed queue or non-inorder mode "
> +			"async copy is not supported on non-inorder mode "
>  			"(vid %d, qid: %d)\n", vid, queue_id);
>  		return -1;
>  	}
> @@ -1667,12 +1669,19 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
>  	vq->vec_pool = rte_malloc_socket(NULL,
>  		VHOST_MAX_ASYNC_VEC * sizeof(struct iovec),
>  		RTE_CACHE_LINE_SIZE, node);
> -	vq->async_descs_split = rte_malloc_socket(NULL,
> +	if (vq_is_packed(dev)) {
> +		vq->async_buffers_packed = rte_malloc_socket(NULL,
> +			vq->size * sizeof(struct vring_used_elem_packed),
> +			RTE_CACHE_LINE_SIZE, node);
> +	} else {
> +		vq->async_descs_split = rte_malloc_socket(NULL,
>  			vq->size * sizeof(struct vring_used_elem),
>  			RTE_CACHE_LINE_SIZE, node);
> -	if (!vq->async_descs_split || !vq->async_pkts_info ||
> -		!vq->it_pool || !vq->vec_pool) {
> -		vhost_free_async_mem(vq);
> +	}
> +
> +	if (!vq->async_buffers_packed || !vq->async_descs_split ||
> +		!vq->async_pkts_info || !vq->it_pool || !vq->vec_pool) {

Not really a fan of this error handling. Checking right after each malloc
whether it succeeded would be cleaner.

> +		vhost_free_async_mem(dev, vq);
>  		VHOST_LOG_CONFIG(ERR,
>  			"async register failed: cannot allocate memory for vq data "
>  			"(vid %d, qid: %d)\n", vid, queue_id);
> @@ -1728,7 +1737,7 @@ int rte_vhost_async_channel_unregister(int vid, uint16_t queue_id)
>  		goto out;
>  	}
>
> -	vhost_free_async_mem(vq);
> +	vhost_free_async_mem(dev, vq);
>
>  	vq->async_ops.transfer_data = NULL;
>  	vq->async_ops.check_completed_copies = NULL;
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index f628714c2..673335217 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -201,9 +201,18 @@ struct vhost_virtqueue {
>  	uint16_t async_pkts_idx;
>  	uint16_t async_pkts_inflight_n;
>  	uint16_t async_last_pkts_n;
> -	struct vring_used_elem *async_descs_split;
> -	uint16_t async_desc_idx;
> -	uint16_t last_async_desc_idx;
> +	union {
> +		struct vring_used_elem *async_descs_split;
> +		struct vring_used_elem_packed *async_buffers_packed;
> +	};
> +	union {
> +		uint16_t async_desc_idx;
> +		uint16_t async_packed_buffer_idx;
> +	};
> +	union {
> +		uint16_t last_async_desc_idx;
> +		uint16_t last_async_buffer_idx;
> +	};

This looks almost good to me now, thanks for making the change. The only
minor issue is the naming, which is not consistent across the fields:
sometimes it contains "split" or "packed", sometimes not.

Maxime
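[Editor's note] The per-allocation error handling suggested above could look like the following standalone sketch. The struct, field, and function names are invented stand-ins, with plain malloc()/free() in place of rte_malloc_socket()/rte_free(); the point is the shape of the control flow, not the actual vhost code:

```c
#include <assert.h>
#include <stdlib.h>

/* Invented stand-in for the virtqueue's async metadata pointers. */
struct async_vq {
	void *pkts_info;
	void *it_pool;
	void *vec_pool;
};

/* Check each allocation as soon as it is made and unwind immediately
 * on failure, instead of one combined NULL check after all of them. */
static int
async_vq_alloc(struct async_vq *vq, size_t n)
{
	vq->pkts_info = NULL;
	vq->it_pool = NULL;
	vq->vec_pool = NULL;

	vq->pkts_info = malloc(n);
	if (vq->pkts_info == NULL)
		goto err;

	vq->it_pool = malloc(n);
	if (vq->it_pool == NULL)
		goto err;

	vq->vec_pool = malloc(n);
	if (vq->vec_pool == NULL)
		goto err;

	return 0;

err:
	/* free(NULL) is a no-op, so a partial allocation unwinds safely. */
	free(vq->pkts_info);
	free(vq->it_pool);
	free(vq->vec_pool);
	vq->pkts_info = vq->it_pool = vq->vec_pool = NULL;
	return -1;
}
```

With this shape, a later log message can also report which allocation failed, rather than only that one of them did.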