From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Jens Freimann <jfreimann@redhat.com>, dev@dpdk.org
Cc: tiwei.bie@intel.com, Gavin.Hu@arm.com
Subject: Re: [dpdk-dev] [PATCH v7 5/8] net/virtio: implement transmit path for packed queues
Date: Thu, 11 Oct 2018 19:31:57 +0200
Message-ID: <8772e063-9d04-be01-51a4-567665908505@redhat.com>
In-Reply-To: <20181003131118.21491-6-jfreimann@redhat.com>
I'm testing your series, and it gets stuck after 256 packets in the
transmit path. When it happens, the descriptor's flags indicate it has
been made available by the driver (desc->flags = 0x80), but that is not
consistent with the expected wrap counter value (0).

I'm not sure this is the root cause, but the code below looks broken:
On 10/03/2018 03:11 PM, Jens Freimann wrote:
> +static inline void
> +virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
> +			      uint16_t needed, int use_indirect, int can_push,
> +			      int in_order)
> +{
> +	struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
> +	struct vq_desc_extra *dxp, *head_dxp;
> +	struct virtqueue *vq = txvq->vq;
> +	struct vring_desc_packed *start_dp, *head_dp;
> +	uint16_t seg_num = cookie->nb_segs;
> +	uint16_t idx, head_id;
> +	uint16_t head_size = vq->hw->vtnet_hdr_size;
> +	struct virtio_net_hdr *hdr;
> +	int wrap_counter = vq->vq_ring.avail_wrap_counter;
> +
> +	head_id = vq->vq_desc_head_idx;
> +	idx = head_id;
> +	start_dp = vq->vq_ring.desc_packed;
> +	dxp = &vq->vq_descx[idx];
> +	dxp->ndescs = needed;
> +
> +	head_dp = &vq->vq_ring.desc_packed[head_id];
> +	head_dxp = &vq->vq_descx[head_id];
> +	head_dxp->cookie = (void *) cookie;
> +
> +	if (can_push) {
> +		/* prepend cannot fail, checked by caller */
> +		hdr = (struct virtio_net_hdr *)
> +			rte_pktmbuf_prepend(cookie, head_size);
> +		/* rte_pktmbuf_prepend() counts the hdr size to the pkt length,
> +		 * which is wrong. Below subtract restores correct pkt size.
> +		 */
> +		cookie->pkt_len -= head_size;
> +
> +		/* if offload disabled, it is not zeroed below, do it now */
> +		if (!vq->hw->has_tx_offload) {
> +			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
> +			ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
> +			ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
> +			ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
> +			ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
> +			ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
> +		}
> +	} else if (use_indirect) {
> +		/* setup tx ring slot to point to indirect
> +		 * descriptor list stored in reserved region.
> +		 *
> +		 * the first slot in indirect ring is already preset
> +		 * to point to the header in reserved region
> +		 */
> +		start_dp[idx].addr = txvq->virtio_net_hdr_mem +
> +			RTE_PTR_DIFF(&txr[idx].tx_indir_pq, txr);
> +		start_dp[idx].len = (seg_num + 1) * sizeof(struct vring_desc_packed);
> +		start_dp[idx].flags = VRING_DESC_F_INDIRECT;
> +		hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
> +
> +		/* loop below will fill in rest of the indirect elements */
> +		start_dp = txr[idx].tx_indir_pq;
> +		idx = 1;
> +	} else {
> +		/* setup first tx ring slot to point to header
> +		 * stored in reserved region.
> +		 */
> +		start_dp[idx].addr = txvq->virtio_net_hdr_mem +
> +			RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
> +		start_dp[idx].len = vq->hw->vtnet_hdr_size;
> +		start_dp[idx].flags = VRING_DESC_F_NEXT;
> +		start_dp[idx].flags |=
> +			VRING_DESC_F_AVAIL(vq->vq_ring.avail_wrap_counter) |
> +			VRING_DESC_F_USED(!vq->vq_ring.avail_wrap_counter);
> +		hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
> +		idx = dxp->next;
> +	}
> +
> +	virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload);
> +
> +	do {
> +		if (idx >= vq->vq_nentries) {
> +			idx = 0;
> +			vq->vq_ring.avail_wrap_counter ^= 1;
> +		}
> +		start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
> +		start_dp[idx].len = cookie->data_len;
> +		start_dp[idx].flags = cookie->next ? VRING_DESC_F_NEXT : 0;
> +		start_dp[idx].flags |=
> +			VRING_DESC_F_AVAIL(vq->vq_ring.avail_wrap_counter) |
> +			VRING_DESC_F_USED(!vq->vq_ring.avail_wrap_counter);
> +		if (use_indirect) {
> +			if (++idx >= (seg_num + 1))
> +				break;
> +		} else {
> +			dxp = &vq->vq_descx[idx];
> +			idx = dxp->next;
> +		}
Imagine the current idx is 255; dxp->next will then give idx 0, right?
In that case, on the next iteration, the flags for desc[0] won't be set
available properly, because vq->vq_ring.avail_wrap_counter isn't updated
(the "idx >= vq->vq_nentries" check never fires when idx comes from
dxp->next instead of a plain increment).
I'm not sure how it can work like this; shouldn't dxp save the wrap
counter value in the out-of-order case?
Thread overview: 20+ messages
2018-10-03 13:11 [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues Jens Freimann
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 1/8] net/virtio: vring init for packed queues Jens Freimann
2018-10-04 11:54 ` Maxime Coquelin
2018-10-05 8:10 ` Jens Freimann
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 2/8] net/virtio: add packed virtqueue defines Jens Freimann
2018-10-04 11:54 ` Maxime Coquelin
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 3/8] net/virtio: add packed virtqueue helpers Jens Freimann
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 4/8] net/virtio: dump packed virtqueue data Jens Freimann
2018-10-04 13:23 ` Maxime Coquelin
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 5/8] net/virtio: implement transmit path for packed queues Jens Freimann
2018-10-10 7:27 ` Maxime Coquelin
2018-10-10 11:43 ` Jens Freimann
2018-10-11 17:31 ` Maxime Coquelin [this message]
2018-10-12 7:24 ` Jens Freimann
2018-10-12 7:41 ` Maxime Coquelin
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 6/8] net/virtio: implement receive " Jens Freimann
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 7/8] net/virtio: add virtio send command packed queue support Jens Freimann
2018-10-03 13:11 ` [dpdk-dev] [PATCH v7 8/8] net/virtio: enable packed virtqueues by default Jens Freimann
2018-10-03 13:19 ` [dpdk-dev] [PATCH v7 0/8] implement packed virtqueues Jens Freimann
2018-10-04 13:59 ` Maxime Coquelin