Date: Mon, 14 Dec 2015 09:55:37 +0800
From: Yuanhan Liu
To: Rich Lane
Cc: dev@dpdk.org, Victor Kaplansky, "Michael S. Tsirkin"
Subject: Re: [dpdk-dev] [PATCH 1/5] vhost: refactor rte_vhost_dequeue_burst
Message-ID: <20151214015537.GZ29571@yliu-dev.sh.intel.com>
References: <1449122773-25510-1-git-send-email-yuanhan.liu@linux.intel.com>
 <1449122773-25510-2-git-send-email-yuanhan.liu@linux.intel.com>

On Fri, Dec 11, 2015 at 10:55:48PM -0800, Rich Lane wrote:
> On Wed, Dec 2, 2015 at 10:06 PM, Yuanhan Liu wrote:
>
> +static inline struct rte_mbuf *
> +copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
> +                 uint16_t desc_idx, struct rte_mempool *mbuf_pool)
> +{
>
> ...
>
> +
> +       desc = &vq->desc[desc_idx];
> +       desc_addr = gpa_to_vva(dev, desc->addr);
> +       rte_prefetch0((void *)(uintptr_t)desc_addr);
> +
> +       /* Discard virtio header */
> +       desc_avail  = desc->len - vq->vhost_hlen;
>
> If desc->len is zero (happens all the time when restarting DPDK apps in the
> guest) then desc_avail will be huge.

I'm aware of it; I have already noted that in the cover letter. This
patch set is just a code refactoring; fixes like that will come in a
later patch set.

(The thing is that Huawei is very cagey about making any changes to the
vhost rx/tx code, as it is the hot data path. So I will not make any
further changes on top of this refactoring unless it gets applied.)

> +       desc_offset = vq->vhost_hlen;
> +
> +       mbuf_avail  = 0;
> +       mbuf_offset = 0;
> +       while (desc_avail || (desc->flags & VRING_DESC_F_NEXT) != 0) {
>
> +               /* This desc reachs to its end, get the next one */
> +               if (desc_avail == 0) {
> +                       desc = &vq->desc[desc->next];
>
> Need to check desc->next against vq->size.

Thanks for the reminder.

> There should be a limit on the number of descriptors in a chain to
> prevent an infinite loop.
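
For reference, here is a rough sketch of the kind of checks being
suggested. The structure and function below are simplified stand-ins
for illustration only, not the actual vhost code:

#include <stdint.h>

#define VRING_DESC_F_NEXT 1

/* Reduced stand-in for the vring descriptor layout. */
struct vring_desc {
        uint64_t addr;
        uint32_t len;
        uint16_t flags;
        uint16_t next;
};

/*
 * Walk a descriptor chain starting at desc_idx.  Returns the number of
 * descriptors in the chain, or -1 if the chain is malformed: an
 * out-of-range next index, or a chain longer than the ring itself,
 * which would otherwise spin forever on a corrupted or zeroed ring.
 */
static int
walk_desc_chain(const struct vring_desc *ring, uint16_t ring_size,
                uint16_t desc_idx)
{
        uint32_t nr_descs = 0;

        while (1) {
                if (desc_idx >= ring_size)
                        return -1;      /* index out of range */
                if (++nr_descs > ring_size)
                        return -1;      /* longer than the ring: a loop */

                const struct vring_desc *desc = &ring[desc_idx];

                /* ... copy desc->len bytes into the mbuf here, skipping
                 * the virtio header on the first descriptor ... */

                if ((desc->flags & VRING_DESC_F_NEXT) == 0)
                        break;
                desc_idx = desc->next;
        }

        return nr_descs;
}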
>  uint16_t
>  rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
>         struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
>  {
> ...
>         avail_idx =  *((volatile uint16_t *)&vq->avail->idx);
> -
> -       /* If there are no available buffers then return. */
> -       if (vq->last_used_idx == avail_idx)
> +       free_entries = avail_idx - vq->last_used_idx;
> +       if (free_entries == 0)
>                 return 0;
>
> A common problem is that avail_idx goes backwards when the guest zeroes its
> virtqueues. This function could check for free_entries > vq->size and reset
> vq->last_used_idx.

Thanks, but ditto: I will consider it in another patch set.

> +       LOG_DEBUG(VHOST_DATA, "(%"PRIu64") about to dequene %u buffers\n",
> +                       dev->device_fh, count);
>
> Typo at "dequene".

Oops; thanks.

	--yliu
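
For completeness, a minimal sketch of the avail_idx sanity check
suggested above. The struct is a simplified stand-in, not the real
vhost_virtqueue, and the helper name is an assumption for illustration:

#include <stdint.h>

/* Reduced stand-in for the virtqueue state involved in the check. */
struct vq_state {
        uint16_t size;          /* number of ring entries */
        uint16_t last_used_idx;
};

/*
 * Compute how many buffers the guest has made available, given a
 * snapshot of the free-running avail->idx counter.  If the guest has
 * zeroed its virtqueues (e.g. the DPDK app inside the guest was
 * restarted), avail_idx can jump backwards and the 16-bit subtraction
 * yields a value larger than the ring; resynchronize instead of trying
 * to dequeue garbage.
 */
static uint16_t
available_entries(struct vq_state *vq, uint16_t avail_idx)
{
        uint16_t free_entries = (uint16_t)(avail_idx - vq->last_used_idx);

        if (free_entries > vq->size) {
                vq->last_used_idx = avail_idx;
                free_entries = 0;
        }

        return free_entries;
}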