From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 30 Jun 2018 00:08:05 +0800
From: Tiwei Bie
To: Maxime Coquelin
Cc: zhihong.wang@intel.com, jfreimann@redhat.com, dev@dpdk.org,
	mst@redhat.com, jasowang@redhat.com, wexu@redhat.com
Message-ID: <20180629160805.GD31010@debian>
References: <20180622134327.18973-1-maxime.coquelin@redhat.com>
	<20180622134327.18973-10-maxime.coquelin@redhat.com>
In-Reply-To: <20180622134327.18973-10-maxime.coquelin@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
User-Agent: Mutt/1.9.5 (2018-04-13)
Subject: Re: [dpdk-dev] [PATCH v5 09/15] vhost: add shadow used ring support for packed rings

On Fri, Jun 22, 2018 at 03:43:21PM +0200, Maxime Coquelin wrote:
[...]
> index 671b4b3bf..62d49f238 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -80,6 +80,12 @@ struct log_cache_entry {
> 	unsigned long val;
> };
> 
> +struct vring_used_elem_packed {
> +	uint32_t id;

The id in a packed ring is 16 bits long. Maybe it's better to define it
as uint16_t here.

> +	uint32_t len;
> +	uint32_t count;
> +};
> +
[...]
> +
> +static __rte_always_inline void
> +flush_shadow_used_ring_packed(struct virtio_net *dev,
> +			struct vhost_virtqueue *vq)
> +{
> +	int i;
> +	uint16_t used_idx = vq->last_used_idx;
> +
> +	/* Split loop in two to save memory barriers */
> +	for (i = 0; i < vq->shadow_used_idx; i++) {
> +		vq->desc_packed[used_idx].index = vq->shadow_used_packed[i].id;
> +		vq->desc_packed[used_idx].len = vq->shadow_used_packed[i].len;
> +
> +		used_idx += vq->shadow_used_packed[i].count;

used_idx may wrap here: unlike last_used_idx in the second loop below,
it is never brought back within vq->size.
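Just to illustrate what I mean (a rough sketch only, untested), the
first loop could mirror the wrap handling the second loop already does
for vq->last_used_idx:

	for (i = 0; i < vq->shadow_used_idx; i++) {
		vq->desc_packed[used_idx].index = vq->shadow_used_packed[i].id;
		vq->desc_packed[used_idx].len = vq->shadow_used_packed[i].len;

		used_idx += vq->shadow_used_packed[i].count;
		/* Sketch: keep used_idx within the ring so the next
		 * iteration doesn't index past vq->desc_packed[]. */
		if (used_idx >= vq->size)
			used_idx -= vq->size;
	}
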
> + } > + > + rte_smp_wmb(); > + > + for (i = 0; i < vq->shadow_used_idx; i++) { > + uint16_t flags; > + > + if (vq->shadow_used_packed[i].len) > + flags = VRING_DESC_F_WRITE; > + else > + flags = 0; > + > + if (vq->used_wrap_counter) { > + flags |= VRING_DESC_F_USED; > + flags |= VRING_DESC_F_AVAIL; > + } else { > + flags &= ~VRING_DESC_F_USED; > + flags &= ~VRING_DESC_F_AVAIL; > + } > + > + vq->desc_packed[vq->last_used_idx].flags = flags; > + > + vhost_log_cache_used_vring(dev, vq, > + vq->last_used_idx * > + sizeof(struct vring_desc_packed), > + sizeof(struct vring_desc_packed)); > + > + vq->last_used_idx += vq->shadow_used_packed[i].count; > + if (vq->last_used_idx >= vq->size) { > + vq->used_wrap_counter ^= 1; > + vq->last_used_idx -= vq->size; > + } > + } > + > + rte_smp_wmb(); > + vq->shadow_used_idx = 0; > + vhost_log_cache_sync(dev, vq); > +} > + > +static __rte_always_inline void > +update_shadow_used_ring_packed(struct vhost_virtqueue *vq, > + uint16_t desc_idx, uint16_t len, uint16_t count) > +{ > + uint16_t i = vq->shadow_used_idx++; > + > + vq->shadow_used_packed[i].id = desc_idx; > + vq->shadow_used_packed[i].len = len; > + vq->shadow_used_packed[i].count = count; > } > > static inline void > -- > 2.14.4 >
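
For the id width comment above, something along these lines is what I
had in mind (sketch only, matching the 16-bit descriptor id in the
packed ring format):

	struct vring_used_elem_packed {
		uint16_t id;
		uint32_t len;
		uint32_t count;
	};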