From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jason Wang
To: Marvin Liu, tiwei.bie@intel.com, maxime.coquelin@redhat.com, dev@dpdk.org
Date: Thu, 11 Jul 2019 16:35:17 +0800
Subject: Re: [dpdk-dev] [RFC PATCH 02/13] add vhost packed ring fast enqueue function
Message-ID: <7de99b4b-7538-5694-36ee-c33edc17f3d2@redhat.com>
In-Reply-To: <20190708171320.38802-3-yong.liu@intel.com>
References: <20190708171320.38802-1-yong.liu@intel.com> <20190708171320.38802-3-yong.liu@intel.com>
List-Id: DPDK patches and discussions
On 2019/7/9 1:13 AM, Marvin Liu wrote:
> In fast enqueue function, will first check whether descriptors are
> cache aligned. Fast enqueue function will check prerequisites in the
> beginning. Fast enqueue function do not support chained mbufs, normal
> function will handle that.
>
> Signed-off-by: Marvin Liu
>
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index 884befa85..f24026acd 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -39,6 +39,8 @@
>
>  #define VHOST_LOG_CACHE_NR 32
>
> +/* Used in fast packed ring functions */
> +#define PACKED_DESC_PER_CACHELINE (RTE_CACHE_LINE_SIZE / sizeof(struct vring_packed_desc))
>  /**
>   * Structure contains buffer address, length and descriptor index
>   * from vring to do scatter RX.
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 003aec1d4..b877510da 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -897,6 +897,115 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  	return pkt_idx;
>  }
>
> +static __rte_always_inline uint16_t
> +virtio_dev_rx_fast_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> +	struct rte_mbuf **pkts)
> +{
> +	bool wrap_counter = vq->avail_wrap_counter;
> +	struct vring_packed_desc *descs = vq->desc_packed;
> +	uint16_t avail_idx = vq->last_avail_idx;
> +	uint64_t desc_addr, desc_addr1, desc_addr2, desc_addr3, len, len1,
> +		len2, len3;
> +	struct virtio_net_hdr_mrg_rxbuf *hdr, *hdr1, *hdr2, *hdr3;
> +	uint32_t buf_offset = dev->vhost_hlen;
> +
> +	if (unlikely(avail_idx & 0x3))
> +		return -1;
> +
> +	if (unlikely(avail_idx < (vq->size - PACKED_DESC_PER_CACHELINE)))
> +		rte_prefetch0((void *)(uintptr_t)&descs[avail_idx +
> +			PACKED_DESC_PER_CACHELINE]);
> +	else
> +		rte_prefetch0((void *)(uintptr_t)&descs[0]);
> +
> +	if (unlikely((pkts[0]->next != NULL) |
> +		(pkts[1]->next != NULL) |
> +		(pkts[2]->next != NULL) |
> +		(pkts[3]->next != NULL)))
> +		return -1;
> +
> +	if (unlikely(!desc_is_avail(&descs[avail_idx], wrap_counter)) |
> +		unlikely(!desc_is_avail(&descs[avail_idx + 1], wrap_counter)) |
> +		unlikely(!desc_is_avail(&descs[avail_idx + 2], wrap_counter)) |
> +		unlikely(!desc_is_avail(&descs[avail_idx + 3], wrap_counter)))
> +		return 1;
> +
> +	rte_smp_rmb();
> +
> +	len = descs[avail_idx].len;
> +	len1 = descs[avail_idx + 1].len;
> +	len2 = descs[avail_idx + 2].len;
> +	len3 = descs[avail_idx + 3].len;
> +
> +	if (unlikely((pkts[0]->pkt_len > (len - buf_offset)) |
> +		(pkts[1]->pkt_len > (len1 - buf_offset)) |
> +		(pkts[2]->pkt_len > (len2 - buf_offset)) |
> +		(pkts[3]->pkt_len > (len3 - buf_offset))))
> +		return -1;
> +
> +	desc_addr = vhost_iova_to_vva(dev, vq,
> +			descs[avail_idx].addr,
> +			&len,
> +			VHOST_ACCESS_RW);
> +
> +	desc_addr1 = vhost_iova_to_vva(dev, vq,
> +			descs[avail_idx + 1].addr,
> +			&len1,
> +			VHOST_ACCESS_RW);
> +
> +	desc_addr2 = vhost_iova_to_vva(dev, vq,
> +			descs[avail_idx + 2].addr,
> +			&len2,
> +			VHOST_ACCESS_RW);
> +
> +	desc_addr3 = vhost_iova_to_vva(dev, vq,
> +			descs[avail_idx + 3].addr,
> +			&len3,
> +			VHOST_ACCESS_RW);

How can you guarantee that len3 is zero after this?

Thanks