DPDK patches and discussions
* [Bug 1295] Virtio driver, packed mode, the first desc misses the next flag, causing the message to be sent abnormally
@ 2023-10-07  3:37 bugzilla
  2023-11-28 16:25 ` bugzilla
  0 siblings, 1 reply; 2+ messages in thread
From: bugzilla @ 2023-10-07  3:37 UTC (permalink / raw)
  To: dev


https://bugs.dpdk.org/show_bug.cgi?id=1295

            Bug ID: 1295
           Summary: Virtio driver, packed mode, the first desc misses the
                    next flag, causing the message to be sent abnormally
           Product: DPDK
           Version: 23.07
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: major
          Priority: Normal
         Component: vhost/virtio
          Assignee: dev@dpdk.org
          Reporter: fengjiang_liu@163.com
  Target Milestone: ---

In the virtio_xmit_pkts_packed transmit path, when the virtio header and the
packet payload are placed in two separate descriptors, the descriptor holding
the virtio header is missing the VRING_DESC_F_NEXT flag. The header and the
payload are therefore treated as two separate packets, and the packet is
transmitted abnormally.

static inline void
virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
                              uint16_t needed, int use_indirect, int can_push,
                              int in_order)
{
        struct virtio_tx_region *txr = txvq->hdr_mz->addr;
        struct vq_desc_extra *dxp;
        struct virtqueue *vq = virtnet_txq_to_vq(txvq);
        struct vring_packed_desc *start_dp, *head_dp;
        uint16_t idx, id, head_idx, head_flags;
        int16_t head_size = vq->hw->vtnet_hdr_size;
        struct virtio_net_hdr *hdr;
        uint16_t prev;
        bool prepend_header = false;
        uint16_t seg_num = cookie->nb_segs;

        id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx;

        dxp = &vq->vq_descx[id];
        dxp->ndescs = needed;
        dxp->cookie = cookie;

        head_idx = vq->vq_avail_idx;
        idx = head_idx;
        prev = head_idx;
        start_dp = vq->vq_packed.ring.desc;

        head_dp = &vq->vq_packed.ring.desc[idx];
        head_flags = cookie->next ? VRING_DESC_F_NEXT : 0;
        /* ====> branch 1: the mbuf is not a buffer list (next == NULL),
         * so no NEXT flag is set */
        head_flags |= vq->vq_packed.cached_flags;

        if (can_push) {
                /* prepend cannot fail, checked by caller */
                hdr = rte_pktmbuf_mtod_offset(cookie, struct virtio_net_hdr *,
                                              -head_size);
                prepend_header = true;

                /* if offload disabled, it is not zeroed below, do it now */
                if (!vq->hw->has_tx_offload)
                        virtqueue_clear_net_hdr(hdr);
        } else if (use_indirect) {
                /* setup tx ring slot to point to indirect
                 * descriptor list stored in reserved region.
                 *
                 * the first slot in indirect ring is already preset
                 * to point to the header in reserved region
                 */
                start_dp[idx].addr = txvq->hdr_mem +
                        RTE_PTR_DIFF(&txr[idx].tx_packed_indir, txr);
                start_dp[idx].len = (seg_num + 1) *
                        sizeof(struct vring_packed_desc);
                /* Packed descriptor id needs to be restored when inorder. */
                if (in_order)
                        start_dp[idx].id = idx;
                /* reset flags for indirect desc */
                head_flags = VRING_DESC_F_INDIRECT;
                head_flags |= vq->vq_packed.cached_flags;
                hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;

                /* loop below will fill in rest of the indirect elements */
                start_dp = txr[idx].tx_packed_indir;
                idx = 1;
        } else {
                /* setup first tx ring slot to point to header
                 * stored in reserved region.
                 */
                start_dp[idx].addr = txvq->hdr_mem +
                        RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
                start_dp[idx].len = vq->hw->vtnet_hdr_size;
                hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
                /* ====> branch 2: a descriptor is added for the virtio
                 * header, but no NEXT flag is set */
                idx++;
                if (idx >= vq->vq_nentries) {
                        idx -= vq->vq_nentries;
                        vq->vq_packed.cached_flags ^=
                                VRING_PACKED_DESC_F_AVAIL_USED;
                }
        }

        if (vq->hw->has_tx_offload)
                virtqueue_xmit_offload(hdr, cookie);

        do {
                uint16_t flags;

                start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
                start_dp[idx].len  = cookie->data_len;
                if (prepend_header) {
                        start_dp[idx].addr -= head_size;
                        start_dp[idx].len += head_size;
                        prepend_header = false;
                }

                if (likely(idx != head_idx)) {
                        flags = cookie->next ? VRING_DESC_F_NEXT : 0;
                        flags |= vq->vq_packed.cached_flags;
                        start_dp[idx].flags = flags;
                }
                prev = idx;
                idx++;
                if (idx >= vq->vq_nentries) {
                        idx -= vq->vq_nentries;
                        vq->vq_packed.cached_flags ^=
                                VRING_PACKED_DESC_F_AVAIL_USED;
                }
        } while ((cookie = cookie->next) != NULL);

        start_dp[prev].id = id;

        if (use_indirect) {
                idx = head_idx;
                if (++idx >= vq->vq_nentries) {
                        idx -= vq->vq_nentries;
                        vq->vq_packed.cached_flags ^=
                                VRING_PACKED_DESC_F_AVAIL_USED;
                }
        }

        vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed);
        vq->vq_avail_idx = idx;

        if (!in_order) {
                vq->vq_desc_head_idx = dxp->next;
                if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
                        vq->vq_desc_tail_idx = VQ_RING_DESC_CHAIN_END;
        }
        /* ====> head_flags: no NEXT flag set */
        virtqueue_store_flags_packed(head_dp, head_flags,
                                     vq->hw->weak_barriers);
}

-- 
You are receiving this mail because:
You are the assignee for the bug.



* [Bug 1295] Virtio driver, packed mode, the first desc misses the next flag, causing the message to be sent abnormally
  2023-10-07  3:37 [Bug 1295] Virtio driver, packed mode, the first desc misses the next flag, causing the message to be sent abnormally bugzilla
@ 2023-11-28 16:25 ` bugzilla
  0 siblings, 0 replies; 2+ messages in thread
From: bugzilla @ 2023-11-28 16:25 UTC (permalink / raw)
  To: dev


https://bugs.dpdk.org/show_bug.cgi?id=1295

Thomas Monjalon (thomas@monjalon.net) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |RESOLVED
         Resolution|---                         |FIXED

--- Comment #1 from Thomas Monjalon (thomas@monjalon.net) ---
Resolved in http://git.dpdk.org/dpdk/commit/?id=f923636411



