From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id CE59CA0C4A;
	Thu,  8 Jul 2021 12:13:36 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id BC9F740C35;
	Thu,  8 Jul 2021 12:13:36 +0200 (CEST)
Received: from mga05.intel.com (mga05.intel.com [192.55.52.43])
	by mails.dpdk.org (Postfix) with ESMTP id 8D6D540696;
	Thu,  8 Jul 2021 12:13:35 +0200 (CEST)
X-IronPort-AV: E=McAfee;i="6200,9189,10038"; a="295118437"
X-IronPort-AV: E=Sophos;i="5.84,222,1620716400"; d="scan'208";a="295118437"
Received: from orsmga008.jf.intel.com ([10.7.209.65])
	by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
	08 Jul 2021 03:13:34 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.84,222,1620716400"; d="scan'208";a="457837344"
Received: from dpdk_jiangcheng.sh.intel.com ([10.67.119.149])
	by orsmga008.jf.intel.com with ESMTP; 08 Jul 2021 03:13:32 -0700
From: Cheng Jiang <cheng1.jiang@intel.com>
To: maxime.coquelin@redhat.com, Chenbo.Xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, yong.liu@intel.com,
	yvonnex.yang@intel.com, Cheng Jiang <cheng1.jiang@intel.com>,
	stable@dpdk.org
Date: Thu, 8 Jul 2021 09:58:01 +0000
Message-Id: <20210708095801.23973-1-cheng1.jiang@intel.com>
X-Mailer: git-send-email 2.29.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH] net/virtio: fix refill order in packed ring
	datapath
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
	<mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
	<mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

The front-end should refill the descriptor with the mbuf indicated by
the buff_id rather than the index of the used descriptor. The back-end
may return buffers out of order if async copy mode is enabled.

When initializing the rxq, refill the descriptors in order, as buff_id
is not available at that time.
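To illustrate the ordering problem, here is a minimal standalone sketch
(the demo_desc/cookie_by_id names are hypothetical, not the real virtio
structures): when the back-end completes buffers out of order, the id
field it stamps into the used descriptor, not the ring slot index,
identifies which buffer was consumed and may now be refilled.

#include <stdio.h>
#include <stdint.h>

#define RING_SIZE 4

/* Stand-in for a used packed-ring descriptor (hypothetical). */
struct demo_desc {
	uint16_t id; /* buffer id stamped by the back-end */
};

int main(void)
{
	/* Cookies (mbufs) tracked per buffer id at enqueue time. */
	const char *cookie_by_id[RING_SIZE] = {
		"mbuf0", "mbuf1", "mbuf2", "mbuf3"
	};
	/* Async copy mode: completions may land out of order. */
	const uint16_t completion_order[RING_SIZE] = { 2, 0, 3, 1 };
	struct demo_desc ring[RING_SIZE];
	uint16_t i;

	for (i = 0; i < RING_SIZE; i++)
		ring[i].id = completion_order[i];

	for (i = 0; i < RING_SIZE; i++) {
		uint16_t did = ring[i].id;
		/* Correct: refill with the buffer named by the id field. */
		printf("slot %u: refill %s (slot-indexed code would pick %s)\n",
		       i, cookie_by_id[did], cookie_by_id[i]);
	}
	return 0;
}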
Fixes: a76290c8f1cf ("net/virtio: implement Rx path for packed queues")
Cc: stable@dpdk.org

Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Signed-off-by: Marvin Liu <yong.liu@intel.com>
---
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 3ac847317f..d35875d9ce 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -328,13 +328,35 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf **cookie,
 	return 0;
 }
 
+static inline void
+virtqueue_refill_single_packed(struct virtqueue *vq,
+			struct vring_packed_desc *dp,
+			struct rte_mbuf *cookie)
+{
+	uint16_t flags = vq->vq_packed.cached_flags;
+	struct virtio_hw *hw = vq->hw;
+
+	dp->addr = cookie->buf_iova +
+			RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
+	dp->len = cookie->buf_len -
+			RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
+
+	virtqueue_store_flags_packed(dp, flags,
+		hw->weak_barriers);
+
+	if (++vq->vq_avail_idx >= vq->vq_nentries) {
+		vq->vq_avail_idx -= vq->vq_nentries;
+		vq->vq_packed.cached_flags ^=
+			VRING_PACKED_DESC_F_AVAIL_USED;
+		flags = vq->vq_packed.cached_flags;
+	}
+}
+
 static inline int
-virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
+virtqueue_enqueue_recv_refill_packed_init(struct virtqueue *vq,
 			struct rte_mbuf **cookie, uint16_t num)
 {
 	struct vring_packed_desc *start_dp = vq->vq_packed.ring.desc;
-	uint16_t flags = vq->vq_packed.cached_flags;
-	struct virtio_hw *hw = vq->hw;
 	struct vq_desc_extra *dxp;
 	uint16_t idx;
 	int i;
@@ -350,24 +372,34 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
 		dxp->cookie = (void *)cookie[i];
 		dxp->ndescs = 1;
 
-		start_dp[idx].addr = cookie[i]->buf_iova +
-			RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
-		start_dp[idx].len = cookie[i]->buf_len -
-			RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
+		virtqueue_refill_single_packed(vq, &start_dp[idx], cookie[i]);
+	}
+	vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
+	return 0;
+}
 
-		vq->vq_desc_head_idx = dxp->next;
-		if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
-			vq->vq_desc_tail_idx = vq->vq_desc_head_idx;
+static inline int
+virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
+			struct rte_mbuf **cookie, uint16_t num)
+{
+	struct vring_packed_desc *start_dp = vq->vq_packed.ring.desc;
+	struct vq_desc_extra *dxp;
+	uint16_t idx, did;
+	int i;
 
-		virtqueue_store_flags_packed(&start_dp[idx], flags,
-			hw->weak_barriers);
+	if (unlikely(vq->vq_free_cnt == 0))
+		return -ENOSPC;
+	if (unlikely(vq->vq_free_cnt < num))
+		return -EMSGSIZE;
 
-		if (++vq->vq_avail_idx >= vq->vq_nentries) {
-			vq->vq_avail_idx -= vq->vq_nentries;
-			vq->vq_packed.cached_flags ^=
-				VRING_PACKED_DESC_F_AVAIL_USED;
-			flags = vq->vq_packed.cached_flags;
-		}
+	for (i = 0; i < num; i++) {
+		idx = vq->vq_avail_idx;
+		did = start_dp[idx].id;
+		dxp = &vq->vq_descx[did];
+		dxp->cookie = (void *)cookie[i];
+		dxp->ndescs = 1;
+
+		virtqueue_refill_single_packed(vq, &start_dp[idx], cookie[i]);
 	}
 	vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
 	return 0;
@@ -742,7 +774,7 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
 
 		/* Enqueue allocated buffers */
 		if (virtio_with_packed_queue(vq->hw))
-			error = virtqueue_enqueue_recv_refill_packed(vq,
+			error = virtqueue_enqueue_recv_refill_packed_init(vq,
 						&m, 1);
 		else
 			error = virtqueue_enqueue_recv_refill(vq,
-- 
2.17.1