From: Maxime Coquelin
To: "Hu, Jiayu" , "dev@dpdk.org" , "Xia, Chenbo" , "Wang, YuanX" , "Ma, WenwuX" , "Richardson, Bruce" , "Mcnamara, John"
Date: Tue, 12 Oct 2021 10:34:23 +0200
Message-ID: <93422eb2-fb3a-ab28-c112-34edb3da0499@redhat.com>
References: <20211007220013.355530-1-maxime.coquelin@redhat.com> <20211007220013.355530-9-maxime.coquelin@redhat.com>
Subject: Re: [dpdk-dev] [RFC 08/14] vhost: improve IO vector logic
List-Id: DPDK patches and discussions

Hi,

On 10/12/21 08:05, Hu, Jiayu wrote:
> Hi,
>
>> -----Original Message-----
>> From: Maxime Coquelin
>> Sent: Friday, October 8, 2021 6:00 AM
>> To: dev@dpdk.org; Xia, Chenbo ; Hu, Jiayu ; Wang, YuanX ;
>> Ma, WenwuX ; Richardson, Bruce ; Mcnamara, John
>> Cc: Maxime Coquelin
>> Subject: [RFC 08/14] vhost: improve IO vector logic
>>
>> IO vectors and their iterator arrays were part of the async metadata,
>> but their indexes were not.
>>
>> In order to make this more consistent, this patch adds the indexes to
>> the async metadata.
>> Doing that, we can avoid triggering DMA transfers within the loop, as
>> IO vector index overflow is now prevented in the
>> async_mbuf_to_desc() function.
>>
>> Note that the previous detection mechanism was broken: by the time the
>> overflow was detected it had already happened, so the out-of-bounds
>> memory access would already have occurred.
>>
>> With these changes done, virtio_dev_rx_async_submit_split()
>> and virtio_dev_rx_async_submit_packed() can be further simplified.
>>
>> Signed-off-by: Maxime Coquelin
>> ---
>>  lib/vhost/vhost.h      |   2 +
>>  lib/vhost/virtio_net.c | 296 +++++++++++++++++++----------------------
>>  2 files changed, 136 insertions(+), 162 deletions(-)
>>
>> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
>> index dae9a1ac2d..812d4c55a5 100644
>> --- a/lib/vhost/vhost.h
>> +++ b/lib/vhost/vhost.h
>> @@ -134,6 +134,8 @@ struct vhost_async {
>>
>>  	struct rte_vhost_iov_iter iov_iter[VHOST_MAX_ASYNC_IT];
>>  	struct rte_vhost_iovec iovec[VHOST_MAX_ASYNC_VEC];
>> +	uint16_t iter_idx;
>> +	uint16_t iovec_idx;
>>
>>  	/* data transfer status */
>>  	struct async_inflight_info *pkts_info;
>> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
>> index ae7dded979..5ce4c14a73 100644
>> --- a/lib/vhost/virtio_net.c
>> +++ b/lib/vhost/virtio_net.c
>> @@ -924,33 +924,91 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>  	return error;
>>  }
>>
>> +static __rte_always_inline int
>> +async_iter_initialize(struct vhost_async *async)
>> +{
>> +	struct rte_vhost_iov_iter *iter;
>> +
>> +	if (unlikely(async->iter_idx >= VHOST_MAX_ASYNC_IT)) {
>> +		VHOST_LOG_DATA(ERR, "no more async iterators available\n");
>> +		return -1;
>> +	}
>
> async->iter_idx will not exceed VHOST_MAX_ASYNC_IT, as virtio_dev_rx
> makes sure the number of packets to enqueue is less than or equal to
> MAX_PKT_BURST, which is the same as VHOST_MAX_ASYNC_IT.

Agree, this may not be necessary.

>> +
>> +	if (unlikely(async->iovec_idx >= VHOST_MAX_ASYNC_VEC)) {
>> +		VHOST_LOG_DATA(ERR, "no more async iovec available\n");
>> +		return -1;
>> +	}
>> +
>> +	iter = async->iov_iter + async->iter_idx;
>> +	iter->iov = async->iovec + async->iovec_idx;
>> +	iter->nr_segs = 0;
>> +
>> +	return 0;
>> +}
>> +
>> +static __rte_always_inline int
>> +async_iter_add_iovec(struct vhost_async *async, void *src, void *dst, size_t len)
>> +{
>> +	struct rte_vhost_iov_iter *iter;
>> +	struct rte_vhost_iovec *iovec;
>> +
>> +	if (unlikely(async->iovec_idx >= VHOST_MAX_ASYNC_VEC)) {
>> +		VHOST_LOG_DATA(ERR, "no more async iovec available\n");
>> +		return -1;
>> +	}
>> +
>> +	iter = async->iov_iter + async->iter_idx;
>> +	iovec = async->iovec + async->iovec_idx;
>
> async->iovec_idx is never incremented.

Good catch!

Thanks,
Maxime

> Thanks,
> Jiayu
>
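
As a follow-up to the "iovec_idx is never incremented" remark above, here is a
minimal sketch of how async_iter_add_iovec() could be completed so that the
shared vector index actually advances after each segment is recorded. The
rte_vhost_iovec field names (src_addr, dst_addr, len) and the exact placement
of the increments are assumptions made for illustration; they are not taken
from the version of the patch that was eventually merged.

static __rte_always_inline int
async_iter_add_iovec(struct vhost_async *async, void *src, void *dst, size_t len)
{
	struct rte_vhost_iov_iter *iter;
	struct rte_vhost_iovec *iovec;

	if (unlikely(async->iovec_idx >= VHOST_MAX_ASYNC_VEC)) {
		VHOST_LOG_DATA(ERR, "no more async iovec available\n");
		return -1;
	}

	iter = async->iov_iter + async->iter_idx;
	iovec = async->iovec + async->iovec_idx;

	/* Fill the current vector entry (field names assumed). */
	iovec->src_addr = src;
	iovec->dst_addr = dst;
	iovec->len = len;

	/*
	 * Advance both the per-iterator segment count and the shared iovec
	 * index so the next call uses a fresh entry; this is the increment
	 * that was missing in the hunk discussed above.
	 */
	iter->nr_segs++;
	async->iovec_idx++;

	return 0;
}

A matching finalize/cancel pair would then presumably advance or rewind
iter_idx and iovec_idx once a packet's iterator is complete or abandoned.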