From: Yuanhan Liu
To: Maxime Coquelin
Cc: dev@dpdk.org, mst@redhat.com, jianfeng.tan@intel.com, olivier.matz@6wind.com, stephen@networkplumber.org
Date: Tue, 11 Oct 2016 17:01:46 +0800
Message-ID: <20161011090146.GP1597@yliu-dev.sh.intel.com>
References: <1475773241-5714-1-git-send-email-maxime.coquelin@redhat.com> <1476171927-10134-1-git-send-email-maxime.coquelin@redhat.com>
In-Reply-To: <1476171927-10134-1-git-send-email-maxime.coquelin@redhat.com>
Subject: Re: [dpdk-dev] [PATCH v2] vhost: Only access header if offloading is supported in dequeue path

On Tue, Oct 11, 2016 at 09:45:27AM +0200, Maxime Coquelin wrote:
> @@ -684,12 +699,12 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vring_desc *descs,
>  		  struct rte_mempool *mbuf_pool)
>  {
>  	struct vring_desc *desc;
> -	uint64_t desc_addr;
> +	uint64_t desc_addr = 0;
>  	uint32_t desc_avail, desc_offset;
>  	uint32_t mbuf_avail, mbuf_offset;
>  	uint32_t cpy_len;
>  	struct rte_mbuf *cur = m, *prev = m;
> -	struct virtio_net_hdr *hdr;
> +	struct virtio_net_hdr *hdr = NULL;
>  	/* A counter to avoid desc dead loop chain */
>  	uint32_t nr_desc = 1;
> 
> @@ -698,12 +713,14 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vring_desc *descs,
>  	    (desc->flags & VRING_DESC_F_INDIRECT))
>  		return -1;
> 
> -	desc_addr = gpa_to_vva(dev, desc->addr);
> -	if (unlikely(!desc_addr))
> -		return -1;
> +	if (virtio_net_with_host_offload(dev)) {
> +		desc_addr = gpa_to_vva(dev, desc->addr);
> +		if (unlikely(!desc_addr))
> +			return -1;
> 
> -	hdr = (struct virtio_net_hdr *)((uintptr_t)desc_addr);
> -	rte_prefetch0(hdr);
> +		hdr = (struct virtio_net_hdr *)((uintptr_t)desc_addr);
> +		rte_prefetch0(hdr);
> +	}
> 
>  	/*
>  	 * A virtio driver normally uses at least 2 desc buffers
> @@ -720,18 +737,24 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vring_desc *descs,
>  		if (unlikely(!desc_addr))
>  			return -1;
> 
> -		rte_prefetch0((void *)(uintptr_t)desc_addr);
> -
>  		desc_offset = 0;
>  		desc_avail  = desc->len;
>  		nr_desc    += 1;
> -
> -		PRINT_PACKET(dev, (uintptr_t)desc_addr, desc->len, 0);
>  	} else {
> +		if (!desc_addr) {
> +			desc_addr = gpa_to_vva(dev, desc->addr);
> +			if (unlikely(!desc_addr))
> +				return -1;
> +		}
> +

I think this piece of code makes things a bit complex. I think what you
want to achieve is, besides saving the hdr prefetch, to save one call to
gpa_to_vva() for the non-ANY_LAYOUT case. Does that matter too much?
How about just saving the hdr prefetch?

	if (virtio_net_with_host_offload(dev)) {
		hdr = (struct virtio_net_hdr *)((uintptr_t)desc_addr);
		rte_prefetch0(hdr);
	}
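That is, keep the single gpa_to_vva() at the top as your patch already
has it in the offload case, and make only the header access conditional.
Roughly (an untested sketch, just to illustrate the idea):

	/* untested sketch: address translation stays unconditional */
	desc_addr = gpa_to_vva(dev, desc->addr);
	if (unlikely(!desc_addr))
		return -1;

	/* only the virtio-net header cast and prefetch depend on offloads */
	if (virtio_net_with_host_offload(dev)) {
		hdr = (struct virtio_net_hdr *)((uintptr_t)desc_addr);
		rte_prefetch0(hdr);
	}

Then desc_addr no longer needs the "= 0" initialization, and the extra
gpa_to_vva() you added in the else branch can go away as well.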
	--yliu

>  		desc_avail  = desc->len - dev->vhost_hlen;
>  		desc_offset = dev->vhost_hlen;
>  	}
> 
> +	rte_prefetch0((void *)(uintptr_t)(desc_addr + desc_offset));
> +
> +	PRINT_PACKET(dev, (uintptr_t)(desc_addr + desc_offset), desc_avail, 0);
> +
>  	mbuf_offset = 0;
>  	mbuf_avail  = m->buf_len - RTE_PKTMBUF_HEADROOM;
>  	while (1) {
> @@ -795,7 +818,7 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vring_desc *descs,
>  	prev->data_len = mbuf_offset;
>  	m->pkt_len    += mbuf_offset;
> 
> -	if (hdr->flags != 0 || hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE)
> +	if (virtio_net_with_host_offload(dev))
>  		vhost_dequeue_offload(hdr, m);
> 
>  	return 0;
> -- 
> 2.7.4