From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Tiwei Bie
Cc: zhihong.wang@intel.com, jfreimann@redhat.com, dev@dpdk.org, mst@redhat.com, jasowang@redhat.com, wexu@redhat.com
Subject: Re: [dpdk-dev] [PATCH v6 11/15] vhost: add vector filling support for packed ring
Date: Wed, 4 Jul 2018 18:18:15 +0200
In-Reply-To: <20180704055307.GD28826@debian>
References: <20180702081629.29258-1-maxime.coquelin@redhat.com> <20180702081629.29258-12-maxime.coquelin@redhat.com> <20180704055307.GD28826@debian>

On 07/04/2018 07:53 AM, Tiwei Bie wrote:
> On Mon, Jul 02, 2018 at 10:16:25AM +0200, Maxime Coquelin wrote:
> [...]
>> +static __rte_always_inline int
>> +fill_vec_buf_packed_indirect(struct virtio_net *dev,
>> +			struct vhost_virtqueue *vq,
>> +			struct vring_desc_packed *desc, uint16_t *vec_idx,
>> +			struct buf_vector *buf_vec, uint16_t *len, uint8_t perm)
>> +{
>> +	uint16_t i;
>> +	uint32_t nr_decs;
>> +	uint16_t vec_id = *vec_idx;
>> +	uint64_t dlen;
>> +	struct vring_desc_packed *descs, *idescs = NULL;
>> +
>> +	dlen = desc->len;
>> +	descs = (struct vring_desc_packed *)(uintptr_t)
>> +		vhost_iova_to_vva(dev, vq, desc->addr, &dlen, VHOST_ACCESS_RO);
>> +	if (unlikely(!descs))
>> +		return -1;
>> +
>> +	if (unlikely(dlen < desc->len)) {
>> +		/*
>> +		 * The indirect desc table is not contiguous
>> +		 * in process VA space, we have to copy it.
>> +		 */
>> +		idescs = alloc_copy_ind_table(dev, vq, desc->addr, desc->len);
>> +		if (unlikely(!idescs))
>> +			return -1;
>> +
>> +		descs = idescs;
>> +	}
>> +
>> +	nr_decs = desc->len / sizeof(struct vring_desc_packed);
>
> s/nr_decs = /nr_desc = /

Fixed.

>
>
>> +	if (unlikely(nr_decs >= vq->size)) {
>> +		free_ind_table(idescs);
>> +		return -1;
>> +	}
> [...]
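For completeness, since alloc_copy_ind_table() is not in this hunk: a
minimal sketch of such a shadow-copy helper could look like the code
below, assuming rte_malloc()/rte_memcpy() and the vhost_iova_to_vva()
translation shown above; the exact signature and error handling used in
the series may differ:

static __rte_always_inline struct vring_desc_packed *
alloc_copy_ind_table(struct virtio_net *dev, struct vhost_virtqueue *vq,
		uint64_t desc_addr, uint64_t desc_len)
{
	struct vring_desc_packed *idesc;
	uint64_t src, dst;
	uint64_t len, remain = desc_len;

	/* Contiguous shadow copy of the indirect table in host memory. */
	idesc = rte_malloc(__func__, desc_len, 0);
	if (unlikely(!idesc))
		return NULL;

	dst = (uint64_t)(uintptr_t)idesc;

	while (remain) {
		len = remain;
		/* Translate the next guest-contiguous chunk of the table. */
		src = vhost_iova_to_vva(dev, vq, desc_addr, &len,
				VHOST_ACCESS_RO);
		if (unlikely(!src || !len)) {
			rte_free(idesc);
			return NULL;
		}

		rte_memcpy((void *)(uintptr_t)dst, (void *)(uintptr_t)src, len);

		remain -= len;
		dst += len;
		desc_addr += len;
	}

	return idesc;
}

The caller then walks the contiguous shadow copy and releases it with
free_ind_table() once the buffer vectors have been filled.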
>> +
>> +static inline int
>> +fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
>> +				uint16_t avail_idx, uint16_t *desc_count,
>> +				struct buf_vector *buf_vec, uint16_t *vec_idx,
>> +				uint16_t *buf_id, uint16_t *len, uint8_t perm)
>> +{
>> +	bool wrap_counter = vq->avail_wrap_counter;
>> +	struct vring_desc_packed *descs = vq->desc_packed;
>> +	uint16_t vec_id = *vec_idx;
>> +
>> +	if (avail_idx < vq->last_avail_idx)
>> +		wrap_counter ^= 1;
>
> In which case avail_idx will be less than vq->last_avail_idx
> and we need to wrap the wrap_counter?

In the receive mergeable case, it can happen (see patch 12): avail_idx is
advanced chain after chain and wrapped back modulo vq->size, while
vq->last_avail_idx is only updated once all the buffers of the packet have
been reserved. After such a wrap, avail_idx is lower than
vq->last_avail_idx and points into the next lap of the ring, so the wrap
counter has to be toggled before checking availability (see the
desc_is_avail() sketch at the end of this mail):

static inline int
reserve_avail_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
				uint32_t size, struct buf_vector *buf_vec,
				uint16_t *nr_vec, uint16_t *num_buffers,
				uint16_t *nr_descs)
{
	uint16_t avail_idx;
	uint16_t vec_idx = 0;
	uint16_t max_tries, tries = 0;

	uint16_t buf_id = 0;
	uint16_t len = 0;
	uint16_t desc_count;

	*num_buffers = 0;
	avail_idx = vq->last_avail_idx;

	if (rxvq_is_mergeable(dev))
		max_tries = vq->size;
	else
		max_tries = 1;

	while (size > 0) {
		if (unlikely(fill_vec_buf_packed(dev, vq,
						avail_idx, &desc_count,
						buf_vec, &vec_idx,
						&buf_id, &len,
						VHOST_ACCESS_RO) < 0))
			return -1;

		len = RTE_MIN(len, size);
		update_shadow_used_ring_packed(vq, buf_id, len, desc_count);
		size -= len;

		avail_idx += desc_count;
		if (avail_idx >= vq->size)
			avail_idx -= vq->size;

		*nr_descs += desc_count;

		tries++;
		*num_buffers += 1;

		/*
		 * if we tried all available ring items, and still
		 * can't get enough buf, it means something abnormal
		 * happened.
		 */
		if (unlikely(tries > max_tries))
			return -1;
	}

	*nr_vec = vec_idx;

	return 0;
}

>> +
>> +	if (unlikely(!desc_is_avail(&descs[avail_idx], wrap_counter)))
>> +		return -1;
>> +
>> +	*desc_count = 0;
>> +
>> +	while (1) {
>> +		if (unlikely(vec_id >= BUF_VECTOR_MAX))
>> +			return -1;
>> +
>> +		*desc_count += 1;
>> +		*buf_id = descs[avail_idx].index;
>> +
>> +		if (descs[avail_idx].flags & VRING_DESC_F_INDIRECT) {
>> +			if (unlikely(fill_vec_buf_packed_indirect(dev, vq,
>> +							&descs[avail_idx],
>> +							&vec_id, buf_vec,
>> +							len, perm) < 0))
>> +				return -1;
>> +		} else {
>> +			*len += descs[avail_idx].len;
>> +
>> +			if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
>> +							descs[avail_idx].addr,
>> +							descs[avail_idx].len,
>> +							perm)))
>> +				return -1;
>> +		}
>> +
>> +		if ((descs[avail_idx].flags & VRING_DESC_F_NEXT) == 0)
>> +			break;
>> +
>> +		if (++avail_idx >= vq->size) {
>> +			avail_idx -= vq->size;
>> +			wrap_counter ^= 1;
>> +		}
>> +	}
>> +
>> +	*vec_idx = vec_id;
>> +
>> +	return 0;
>> +}
> [...]
>
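To make the wrap-counter check above more concrete: in the packed ring
layout, each descriptor carries an AVAIL and a USED flag bit, and a
descriptor is available to the device when its AVAIL bit matches the
current wrap counter while its USED bit does not. desc_is_avail() is not
shown in this hunk, so the snippet below is only a sketch of that check;
the flag bit positions follow the virtio 1.1 packed ring layout and the
exact definitions in the series may differ:

/* Packed ring descriptor flag bits (virtio 1.1). */
#define VRING_DESC_F_AVAIL	(1 << 7)
#define VRING_DESC_F_USED	(1 << 15)

static inline bool
desc_is_avail(struct vring_desc_packed *desc, bool wrap_counter)
{
	/* Available iff AVAIL == wrap counter and USED != wrap counter. */
	return wrap_counter == !!(desc->flags & VRING_DESC_F_AVAIL) &&
		wrap_counter != !!(desc->flags & VRING_DESC_F_USED);
}

That is why fill_vec_buf_packed() toggles wrap_counter both when the
avail_idx passed in is below last_avail_idx and when it walks past the end
of the ring: descriptors on the next lap were made available with the
opposite counter value.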