From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Gaoxiang Liu, chenbo.xia@intel.com
Cc: dev@dpdk.org, liugaoxiang@huawei.com, stable@dpdk.org
Date: Thu, 23 Sep 2021 13:30:08 +0200
Subject: Re: [dpdk-dev] [PATCH] vhost: merge repeated loop in vhost Tx
In-Reply-To: <20210910090530.893-1-gaoxiangliu0@163.com>
References: <20210910021117.130-1-gaoxiangliu0@163.com>
 <20210910090530.893-1-gaoxiangliu0@163.com>

On 9/10/21 11:05, Gaoxiang Liu wrote:
> To improve the performance of vhost Tx, merge the repeated loops in
> eth_vhost_tx. Move the VLAN tag insertion from eth_vhost_tx into
> virtio_dev_rx_packed and virtio_dev_rx_split to save one pass over
> the packet burst.
>
> Fixes: f63d356ee993 ("net/vhost: insert/strip VLAN header in software")
> Cc: stable@dpdk.org

This kind of performance optimization should not be backported to
stable branches.
>
> Signed-off-by: Gaoxiang Liu
> ---
>  drivers/net/vhost/rte_eth_vhost.c | 25 ++++---------------------
>  lib/vhost/virtio_net.c            | 21 +++++++++++++++++++++
>  2 files changed, 25 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> index a202931e9a..ae20550976 100644
> --- a/drivers/net/vhost/rte_eth_vhost.c
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -428,7 +428,6 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>  {
>  	struct vhost_queue *r = q;
>  	uint16_t i, nb_tx = 0;
> -	uint16_t nb_send = 0;
>  	uint64_t nb_bytes = 0;
>  	uint64_t nb_missed = 0;
>
> @@ -440,33 +439,17 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>  	if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
>  		goto out;
>
> -	for (i = 0; i < nb_bufs; i++) {
> -		struct rte_mbuf *m = bufs[i];
> -
> -		/* Do VLAN tag insertion */
> -		if (m->ol_flags & PKT_TX_VLAN_PKT) {
> -			int error = rte_vlan_insert(&m);
> -			if (unlikely(error)) {
> -				rte_pktmbuf_free(m);
> -				continue;
> -			}
> -		}
> -
> -		bufs[nb_send] = m;
> -		++nb_send;
> -	}
> -
>  	/* Enqueue packets to guest RX queue */
> -	while (nb_send) {
> +	while (nb_bufs) {
>  		uint16_t nb_pkts;
> -		uint16_t num = (uint16_t)RTE_MIN(nb_send,
> +		uint16_t num = (uint16_t)RTE_MIN(nb_bufs,
>  						 VHOST_MAX_PKT_BURST);
>
>  		nb_pkts = rte_vhost_enqueue_burst(r->vid, r->virtqueue_id,
>  						  &bufs[nb_tx], num);
>
>  		nb_tx += nb_pkts;
> -		nb_send -= nb_pkts;
> +		nb_bufs -= nb_pkts;
>  		if (nb_pkts < num)
>  			break;
>  	}
> @@ -474,7 +457,7 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>  	for (i = 0; likely(i < nb_tx); i++)
>  		nb_bytes += bufs[i]->pkt_len;
>
> -	nb_missed = nb_bufs - nb_tx;
> +	nb_missed = nb_bufs;
>
>  	r->stats.pkts += nb_tx;
>  	r->stats.bytes += nb_bytes;
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index 8549afbbe1..2057f4e7fe 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -1218,6 +1218,16 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
>  		uint16_t nr_vec = 0;
>
> +		/* Do VLAN tag insertion */
> +		if (pkts[pkt_idx]->ol_flags & PKT_TX_VLAN_PKT) {
> +			int error = rte_vlan_insert(&pkts[pkt_idx]);
> +			if (unlikely(error)) {
> +				rte_pktmbuf_free(pkts[pkt_idx]);
> +				pkts[pkt_idx] = NULL;
> +				continue;
> +			}
> +		}
> +
>  		if (unlikely(reserve_avail_buf_split(dev, vq,
>  						pkt_len, buf_vec, &num_buffers,
>  						avail_head, &nr_vec) < 0)) {
> @@ -1490,6 +1500,17 @@ virtio_dev_rx_packed(struct virtio_net *dev,
>  	do {
>  		rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
>
> +		/* Do VLAN tag insertion */
> +		if (pkts[pkt_idx]->ol_flags & PKT_TX_VLAN_PKT) {
> +			int error = rte_vlan_insert(&pkts[pkt_idx]);
> +			if (unlikely(error)) {
> +				rte_pktmbuf_free(pkts[pkt_idx]);
> +				pkts[pkt_idx] = NULL;
> +				pkt_idx++;
> +				continue;
> +			}
> +		}
> +
>  		if (count - pkt_idx >= PACKED_BATCH_SIZE) {
> 			if (!virtio_dev_rx_sync_batch_packed(dev, vq,
> 							&pkts[pkt_idx])) {
>

It would make sense to do that in virtio_enqueue_offload, and it would
avoid code duplication.
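For illustration, here is a minimal sketch of what that could look
like. This is only a sketch under assumptions, not the current API:
virtio_enqueue_offload() today takes a plain struct rte_mbuf * and
returns void, so hoisting the VLAN insertion into it would mean
changing its signature to take a struct rte_mbuf ** (rte_vlan_insert()
may reallocate the mbuf) and to return an error for the callers to
handle:

	/*
	 * Sketch only, not the current signature: assumes
	 * virtio_enqueue_offload() is reworked to take a struct
	 * rte_mbuf ** and to return an error code (today it takes
	 * struct rte_mbuf * and returns void).
	 */
	static __rte_always_inline int
	virtio_enqueue_offload(struct rte_mbuf **m_buf,
			struct virtio_net_hdr *net_hdr)
	{
		struct rte_mbuf *m = *m_buf;

		/* Do VLAN tag insertion in one place, shared by the
		 * split and packed ring paths.
		 */
		if (m->ol_flags & PKT_TX_VLAN_PKT) {
			int error = rte_vlan_insert(m_buf);

			if (unlikely(error))
				return error;
			/* rte_vlan_insert() may have replaced the mbuf */
			m = *m_buf;
		}

		/* ... existing csum/GSO handling filling net_hdr from m ... */

		return 0;
	}

That would keep the VLAN handling next to the other Tx offloads, so
virtio_dev_rx_split() and virtio_dev_rx_packed() would only need the
error handling, not a duplicated insertion block.

Regards,
Maxime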