From: Tiwei Bie <tiwei.bie@intel.com>
To: dev@dpdk.org
Cc: yliu@fridaylinux.org, maxime.coquelin@redhat.com, stable@dpdk.org
Date: Tue, 29 Aug 2017 16:26:01 +0800
Message-Id: <20170829082601.30349-1-tiwei.bie@intel.com>
X-Mailer: git-send-email 2.13.2
Subject: [dpdk-dev] [PATCH] net/virtio: fix an incorrect behavior of device stop/start

After starting a device, the driver shouldn't deliver to the applications
the packets that were already present in the device before it was started.
This patch fixes the issue by flushing the Rx queues when starting the
device.

Fixes: a85786dc816f ("virtio: fix states handling during initialization")
Cc: stable@dpdk.org

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/net/virtio/virtio_ethdev.c |  6 ++++++
 drivers/net/virtio/virtio_rxtx.c   |  2 +-
 drivers/net/virtio/virtqueue.c     | 25 +++++++++++++++++++++++++
 drivers/net/virtio/virtqueue.h     |  5 +++++
 4 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index e320811..6d60bc1 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1737,6 +1737,12 @@ virtio_dev_start(struct rte_eth_dev *dev)
 		}
 	}
 
+	/* Flush the packets in Rx queues. */
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxvq = dev->data->rx_queues[i];
+		virtqueue_flush(rxvq->vq);
+	}
+
 	/*Notify the backend
 	 *Otherwise the tap backend might already stop its queue due to fullness.
 	 *vhost backend will have no chance to be waked up
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index e30377c..5e5fcfc 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -81,7 +81,7 @@ virtio_dev_rx_queue_done(void *rxq, uint16_t offset)
 	return VIRTQUEUE_NUSED(vq) >= offset;
 }
 
-static void
+void
 vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
 {
 	struct vring_desc *dp, *dp_tail;
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 9ad77b8..c3a536f 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -59,3 +59,28 @@ virtqueue_detatch_unused(struct virtqueue *vq)
 	}
 	return NULL;
 }
+
+/* Flush the elements in the used ring. */
+void
+virtqueue_flush(struct virtqueue *vq)
+{
+	struct vring_used_elem *uep;
+	struct vq_desc_extra *dxp;
+	uint16_t used_idx, desc_idx;
+	uint16_t nb_used, i;
+
+	nb_used = VIRTQUEUE_NUSED(vq);
+
+	for (i = 0; i < nb_used; i++) {
+		used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
+		uep = &vq->vq_ring.used->ring[used_idx];
+		desc_idx = (uint16_t)uep->id;
+		dxp = &vq->vq_descx[desc_idx];
+		if (dxp->cookie != NULL) {
+			rte_pktmbuf_free(dxp->cookie);
+			dxp->cookie = NULL;
+		}
+		vq->vq_used_cons_idx++;
+		vq_ring_free_chain(vq, desc_idx);
+	}
+}
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 2e12086..9fffcd8 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -304,6 +304,9 @@ void virtqueue_dump(struct virtqueue *vq);
  */
 struct rte_mbuf *virtqueue_detatch_unused(struct virtqueue *vq);
 
+/* Flush the elements in the used ring. */
+void virtqueue_flush(struct virtqueue *vq);
+
 static inline int
 virtqueue_full(const struct virtqueue *vq)
 {
@@ -312,6 +315,8 @@ virtqueue_full(const struct virtqueue *vq)
 
 #define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
 
+void vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx);
+
 static inline void
 vq_update_avail_idx(struct virtqueue *vq)
 {
-- 
2.9.3
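
For reference, a minimal sketch (not part of the patch) of the application-level
scenario the commit message describes, assuming a virtio port that was already
configured and set up with one Rx queue through the usual
rte_eth_dev_configure()/rte_eth_rx_queue_setup() path; restart_port(), the
port_id value, and the burst size of 32 are illustrative only:

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	/*
	 * Stop and restart a port. With this patch applied, packets that were
	 * sitting in the device's Rx rings while the port was stopped are freed
	 * by the driver when the port is started, so the first rx_burst() after
	 * start only returns traffic received after the restart.
	 */
	static void
	restart_port(uint16_t port_id)
	{
		struct rte_mbuf *pkts[32];
		uint16_t nb_rx;

		rte_eth_dev_stop(port_id);
		/* The peer (e.g. a vhost/tap backend) may still queue packets here. */
		if (rte_eth_dev_start(port_id) != 0)
			return;

		/* Only post-start packets should show up from this point on. */
		nb_rx = rte_eth_rx_burst(port_id, 0, pkts, 32);
		while (nb_rx > 0)
			rte_pktmbuf_free(pkts[--nb_rx]);
	}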