Subject: Re: [dpdk-dev] [RFC PATCH 02/13] add vhost packed ring fast enqueue function
From: Jason Wang
To: Marvin Liu, tiwei.bie@intel.com, maxime.coquelin@redhat.com, dev@dpdk.org
Date: Wed, 10 Jul 2019 12:28:19 +0800
In-Reply-To: <20190708171320.38802-3-yong.liu@intel.com>
References: <20190708171320.38802-1-yong.liu@intel.com> <20190708171320.38802-3-yong.liu@intel.com>
List-Id: DPDK patches and discussions

On 2019/7/9 1:13 AM, Marvin Liu
wrote:
> In fast enqueue function, will first check whether descriptors are
> cache aligned. Fast enqueue function will check prerequisites in the
> beginning. Fast enqueue function do not support chained mbufs, normal
> function will handle that.
>
> Signed-off-by: Marvin Liu
>
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index 884befa85..f24026acd 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -39,6 +39,8 @@
>
>  #define VHOST_LOG_CACHE_NR 32
>
> +/* Used in fast packed ring functions */
> +#define PACKED_DESC_PER_CACHELINE (RTE_CACHE_LINE_SIZE / sizeof(struct vring_packed_desc))
>  /**
>   * Structure contains buffer address, length and descriptor index
>   * from vring to do scatter RX.
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 003aec1d4..b877510da 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -897,6 +897,115 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  	return pkt_idx;
>  }
>
> +static __rte_always_inline uint16_t
> +virtio_dev_rx_fast_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> +	struct rte_mbuf **pkts)
> +{
> +	bool wrap_counter = vq->avail_wrap_counter;
> +	struct vring_packed_desc *descs = vq->desc_packed;
> +	uint16_t avail_idx = vq->last_avail_idx;
> +	uint64_t desc_addr, desc_addr1, desc_addr2, desc_addr3, len, len1,
> +		len2, len3;
> +	struct virtio_net_hdr_mrg_rxbuf *hdr, *hdr1, *hdr2, *hdr3;
> +	uint32_t buf_offset = dev->vhost_hlen;
> +
> +	if (unlikely(avail_idx & 0x3))
> +		return -1;
> +
> +	if (unlikely(avail_idx < (vq->size - PACKED_DESC_PER_CACHELINE)))
> +		rte_prefetch0((void *)(uintptr_t)&descs[avail_idx +
> +			PACKED_DESC_PER_CACHELINE]);
> +	else
> +		rte_prefetch0((void *)(uintptr_t)&descs[0]);
> +
> +	if (unlikely((pkts[0]->next != NULL) |
> +		(pkts[1]->next != NULL) |
> +		(pkts[2]->next != NULL) |
> +		(pkts[3]->next != NULL)))
> +		return -1;
> +
> +	if (unlikely(!desc_is_avail(&descs[avail_idx], wrap_counter)) |
> +	    unlikely(!desc_is_avail(&descs[avail_idx + 1], wrap_counter)) |
> +	    unlikely(!desc_is_avail(&descs[avail_idx + 2], wrap_counter)) |
> +	    unlikely(!desc_is_avail(&descs[avail_idx + 3], wrap_counter)))
> +		return 1;


Any reason for not letting the compiler unroll the loops?

Thanks