From: Maxime Coquelin
To: Jens Freimann, dev@dpdk.org
Cc: tiwei.bie@intel.com, Gavin.Hu@arm.com
Subject: Re: [dpdk-dev] [PATCH v7 5/8] net/virtio: implement transmit path for packed queues
Date: Thu, 11 Oct 2018 19:31:57 +0200
Message-ID: <8772e063-9d04-be01-51a4-567665908505@redhat.com>
In-Reply-To: <20181003131118.21491-6-jfreimann@redhat.com>
References: <20181003131118.21491-1-jfreimann@redhat.com> <20181003131118.21491-6-jfreimann@redhat.com>

I'm testing your series, and it gets stuck after 256 packets in the
transmit path. When that happens, the descriptor flags indicate the
descriptor has been made available by the driver (desc->flags = 0x80),
but that is not consistent with the expected wrap counter value (0).

I'm not sure this is the root cause, but the code below seems broken:

On 10/03/2018 03:11 PM, Jens Freimann wrote:
> +static inline void
> +virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
> +			      uint16_t needed, int use_indirect, int can_push,
> +			      int in_order)
> +{
> +	struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
> +	struct vq_desc_extra *dxp, *head_dxp;
> +	struct virtqueue *vq = txvq->vq;
> +	struct vring_desc_packed *start_dp, *head_dp;
> +	uint16_t seg_num = cookie->nb_segs;
> +	uint16_t idx, head_id;
> +	uint16_t head_size = vq->hw->vtnet_hdr_size;
> +	struct virtio_net_hdr *hdr;
> +	int wrap_counter = vq->vq_ring.avail_wrap_counter;
> +
> +	head_id = vq->vq_desc_head_idx;
> +	idx = head_id;
> +	start_dp = vq->vq_ring.desc_packed;
> +	dxp = &vq->vq_descx[idx];
> +	dxp->ndescs = needed;
> +
> +	head_dp = &vq->vq_ring.desc_packed[head_id];
> +	head_dxp = &vq->vq_descx[head_id];
> +	head_dxp->cookie = (void *) cookie;
> +
> +	if (can_push) {
> +		/* prepend cannot fail, checked by caller */
> +		hdr = (struct virtio_net_hdr *)
> +			rte_pktmbuf_prepend(cookie, head_size);
> +		/* rte_pktmbuf_prepend() counts the hdr size to the pkt length,
> +		 * which is wrong. Below subtract restores correct pkt size.
> +		 */
> +		cookie->pkt_len -= head_size;
> +
> +		/* if offload disabled, it is not zeroed below, do it now */
> +		if (!vq->hw->has_tx_offload) {
> +			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
> +			ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
> +			ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
> +			ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
> +			ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
> +			ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
> +		}
> +	} else if (use_indirect) {
> +		/* setup tx ring slot to point to indirect
> +		 * descriptor list stored in reserved region.
> +		 *
> +		 * the first slot in indirect ring is already preset
> +		 * to point to the header in reserved region
> +		 */
> +		start_dp[idx].addr = txvq->virtio_net_hdr_mem +
> +			RTE_PTR_DIFF(&txr[idx].tx_indir_pq, txr);
> +		start_dp[idx].len = (seg_num + 1) * sizeof(struct vring_desc_packed);
> +		start_dp[idx].flags = VRING_DESC_F_INDIRECT;
> +		hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
> +
> +		/* loop below will fill in rest of the indirect elements */
> +		start_dp = txr[idx].tx_indir_pq;
> +		idx = 1;
> +	} else {
> +		/* setup first tx ring slot to point to header
> +		 * stored in reserved region.
> +		 */
> +		start_dp[idx].addr = txvq->virtio_net_hdr_mem +
> +			RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
> +		start_dp[idx].len = vq->hw->vtnet_hdr_size;
> +		start_dp[idx].flags = VRING_DESC_F_NEXT;
> +		start_dp[idx].flags |=
> +			VRING_DESC_F_AVAIL(vq->vq_ring.avail_wrap_counter) |
> +			VRING_DESC_F_USED(!vq->vq_ring.avail_wrap_counter);
> +		hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
> +		idx = dxp->next;
> +	}
> +
> +	virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload);
> +
> +	do {
> +		if (idx >= vq->vq_nentries) {
> +			idx = 0;
> +			vq->vq_ring.avail_wrap_counter ^= 1;
> +		}
> +		start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
> +		start_dp[idx].len = cookie->data_len;
> +		start_dp[idx].flags = cookie->next ? VRING_DESC_F_NEXT : 0;
> +		start_dp[idx].flags |=
> +			VRING_DESC_F_AVAIL(vq->vq_ring.avail_wrap_counter) |
> +			VRING_DESC_F_USED(!vq->vq_ring.avail_wrap_counter);
> +		if (use_indirect) {
> +			if (++idx >= (seg_num + 1))
> +				break;
> +		} else {
> +			dxp = &vq->vq_descx[idx];
> +			idx = dxp->next;
> +		}

Imagine the current idx is 255: dxp->next will give idx 0, right?
In that case, for desc[0] on the next iteration, the flags won't be
set available properly, because vq->vq_ring.avail_wrap_counter isn't
updated: idx never reaches vq->vq_nentries, so the
"idx >= vq->vq_nentries" check above never fires when the chain
itself wraps.

I'm not sure how it can work like this; shouldn't dxp save the wrap
counter value in the out-of-order case?
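
To make the scenario concrete, here is a tiny standalone model of that
loop (nothing below is DPDK code; the ring size, the next[] array and
the two-slot walk are made up for illustration). It shows the
"idx >= vq_nentries" guard being skipped when the free chain itself
wraps:

#include <stdint.h>
#include <stdio.h>

#define NENTRIES 256

int
main(void)
{
	uint16_t next[NENTRIES];	/* stands in for dxp->next */
	int wrap_counter = 1;		/* stands in for avail_wrap_counter */
	uint16_t idx = NENTRIES - 1;
	int i;

	/* In-order free list: slot n links to (n + 1) % NENTRIES. */
	for (i = 0; i < NENTRIES; i++)
		next[i] = (i + 1) % NENTRIES;

	/* Walk two slots starting at the last ring entry. */
	for (i = 0; i < 2; i++) {
		if (idx >= NENTRIES) {	/* never true: next[] already wrapped */
			idx = 0;
			wrap_counter ^= 1;
		}
		printf("desc[%u] published with wrap counter %d\n",
		       (unsigned)idx, wrap_counter);
		idx = next[idx];	/* 255 -> 0, so the guard above is skipped */
	}
	/* desc[255] and desc[0] both get wrap counter 1, but after the
	 * ring wraps the device expects 0, so desc[0] never looks
	 * available to it.
	 */
	return 0;
}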
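
And for completeness, this is how I read the availability check that
makes flags = 0x80 a dead end once the expected wrap counter is 0 (a
minimal sketch following the virtio 1.1 packed ring layout; the _BIT
macro names and desc_is_avail() are mine, not from this series):

#include <stdbool.h>
#include <stdint.h>

#define DESC_F_AVAIL_BIT	(1u << 7)	/* 0x0080 */
#define DESC_F_USED_BIT		(1u << 15)	/* 0x8000 */

/* A descriptor is available to the device iff its avail bit matches
 * the expected wrap counter and its used bit does not. flags == 0x0080
 * means avail=1/used=0, i.e. published under wrap counter 1; a device
 * expecting wrap counter 0 will never consume it, which matches the
 * hang after one full pass over the 256-entry ring.
 */
static inline bool
desc_is_avail(uint16_t flags, bool wrap_counter)
{
	bool avail = !!(flags & DESC_F_AVAIL_BIT);
	bool used = !!(flags & DESC_F_USED_BIT);

	return avail != used && avail == wrap_counter;
}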