From mboxrd@z Thu Jan  1 00:00:00 1970
From: Christian Ehrhardt
Date: Tue, 17 Aug 2021 11:44:26 +0200
To: Marvin Liu
Cc: dpdk stable, Cheng Jiang
Subject: Re: [dpdk-stable] [PATCH 19.11] net/virtio: fix refill order in packed ring datapath
In-Reply-To: <20210817095236.36985-1-yong.liu@intel.com>
References: <20210817095236.36985-1-yong.liu@intel.com>
List-Id: patches for DPDK stable branches
Content-Type: text/plain; charset="UTF-8"

On Tue, Aug 17, 2021 at 4:30 AM Marvin Liu wrote:
>
> [ upstream commit 2d91b28730a945def257bc372a525c9b5dbf181c ]

Thanks, applied

> The front-end should refill the descriptor with the mbuf indicated by
> the buff_id rather than the index of the used descriptor. The back-end
> may return buffers out of order if async copy mode is enabled.
>
> When initializing the rxq, refill the descriptors in order, as buff_id
> is not available at that time.
>
> Fixes: a76290c8f1cf ("net/virtio: implement Rx path for packed queues")
>
> Signed-off-by: Marvin Liu
> Signed-off-by: Cheng Jiang
> Reviewed-by: Maxime Coquelin
>
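For stable-branch readers, the heart of the fix in miniature: a packed-ring
used element carries the id of the buffer the device actually consumed, and
with out-of-order completion that id need not equal the ring slot index, so
the driver must refill the bookkeeping entry named by that id. The sketch
below is a standalone illustration only, not the driver code; ring_entry,
desc_extra and RING_SIZE are invented stand-ins for the real vring_packed_desc
and vq_desc_extra structures.

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 4

/* Stand-in for a packed-ring used element: the device writes back
 * the id of the buffer it consumed, which may not match the slot. */
struct ring_entry {
	uint16_t id;
};

/* Stand-in for the driver's per-buffer bookkeeping (vq_descx). */
struct desc_extra {
	const char *cookie;
};

int main(void)
{
	/* Slot 0 was completed with buffer id 2: out of order. */
	struct ring_entry ring[RING_SIZE] = { {2}, {0}, {1}, {3} };
	struct desc_extra descx[RING_SIZE] = { {0} };
	uint16_t avail_idx = 0;

	/* Buggy refill: index the bookkeeping by the ring slot. */
	uint16_t buggy = avail_idx;        /* slot 0: wrong buffer */

	/* Fixed refill: index it by the id the device returned. */
	uint16_t did = ring[avail_idx].id; /* buffer 2: correct */

	descx[did].cookie = "new mbuf";
	printf("slot %u carried buffer id %u (not %u): refill descx[%u]\n",
	       (unsigned)avail_idx, (unsigned)did, (unsigned)buggy,
	       (unsigned)did);
	return 0;
}

At Rx queue setup no used elements have been written yet, so there is no id
to honor; hence the separate in-order _init variant the patch adds below.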
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index 5211736d2..421e4847e 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -474,13 +474,35 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf **cookie,
>  	return 0;
>  }
>
> +static inline void
> +virtqueue_refill_single_packed(struct virtqueue *vq,
> +			struct vring_packed_desc *dp,
> +			struct rte_mbuf *cookie)
> +{
> +	uint16_t flags = vq->vq_packed.cached_flags;
> +	struct virtio_hw *hw = vq->hw;
> +
> +	dp->addr = VIRTIO_MBUF_ADDR(cookie, vq) +
> +		RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
> +	dp->len = cookie->buf_len -
> +		RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
> +
> +	virtqueue_store_flags_packed(dp, flags,
> +		hw->weak_barriers);
> +
> +	if (++vq->vq_avail_idx >= vq->vq_nentries) {
> +		vq->vq_avail_idx -= vq->vq_nentries;
> +		vq->vq_packed.cached_flags ^=
> +			VRING_PACKED_DESC_F_AVAIL_USED;
> +		flags = vq->vq_packed.cached_flags;
> +	}
> +}
> +
>  static inline int
> -virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
> +virtqueue_enqueue_recv_refill_packed_init(struct virtqueue *vq,
>  			struct rte_mbuf **cookie, uint16_t num)
>  {
>  	struct vring_packed_desc *start_dp = vq->vq_packed.ring.desc;
> -	uint16_t flags = vq->vq_packed.cached_flags;
> -	struct virtio_hw *hw = vq->hw;
>  	struct vq_desc_extra *dxp;
>  	uint16_t idx;
>  	int i;
> @@ -496,24 +518,34 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
>  		dxp->cookie = (void *)cookie[i];
>  		dxp->ndescs = 1;
>
> -		start_dp[idx].addr = VIRTIO_MBUF_ADDR(cookie[i], vq) +
> -				RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
> -		start_dp[idx].len = cookie[i]->buf_len - RTE_PKTMBUF_HEADROOM
> -				+ hw->vtnet_hdr_size;
> +		virtqueue_refill_single_packed(vq, &start_dp[idx], cookie[i]);
> +	}
> +	vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
> +	return 0;
> +}
>
> -		vq->vq_desc_head_idx = dxp->next;
> -		if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
> -			vq->vq_desc_tail_idx = vq->vq_desc_head_idx;
> +static inline int
> +virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
> +			struct rte_mbuf **cookie, uint16_t num)
> +{
> +	struct vring_packed_desc *start_dp = vq->vq_packed.ring.desc;
> +	struct vq_desc_extra *dxp;
> +	uint16_t idx, did;
> +	int i;
>
> -		virtqueue_store_flags_packed(&start_dp[idx], flags,
> -					     hw->weak_barriers);
> +	if (unlikely(vq->vq_free_cnt == 0))
> +		return -ENOSPC;
> +	if (unlikely(vq->vq_free_cnt < num))
> +		return -EMSGSIZE;
>
> -		if (++vq->vq_avail_idx >= vq->vq_nentries) {
> -			vq->vq_avail_idx -= vq->vq_nentries;
> -			vq->vq_packed.cached_flags ^=
> -				VRING_PACKED_DESC_F_AVAIL_USED;
> -			flags = vq->vq_packed.cached_flags;
> -		}
> +	for (i = 0; i < num; i++) {
> +		idx = vq->vq_avail_idx;
> +		did = start_dp[idx].id;
> +		dxp = &vq->vq_descx[did];
> +		dxp->cookie = (void *)cookie[i];
> +		dxp->ndescs = 1;
> +
> +		virtqueue_refill_single_packed(vq, &start_dp[idx], cookie[i]);
>  	}
>  	vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
>  	return 0;
> @@ -1022,7 +1054,7 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
>
>  		/* Enqueue allocated buffers */
>  		if (vtpci_packed_queue(vq->hw))
> -			error = virtqueue_enqueue_recv_refill_packed(vq,
> +			error = virtqueue_enqueue_recv_refill_packed_init(vq,
>  					&m, 1);
>  		else
>  			error = virtqueue_enqueue_recv_refill(vq,
> --
> 2.17.1
>

-- 
Christian Ehrhardt
Staff Engineer, Ubuntu Server
Canonical Ltd