From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from TPC-E3-15-001.phaedrus.sandvine.com (lab.sandvine.com [98.159.241.2])
	by dpdk.org (Postfix) with ESMTP id 61EA92FDD
	for ; Thu, 10 Mar 2016 15:44:14 +0100 (CET)
Received: by TPC-E3-15-001.phaedrus.sandvine.com (Postfix, from userid 10523)
	id A4D0922426; Thu, 10 Mar 2016 09:44:13 -0500 (EST)
From: Kyle Larose
To: huawei.xie@intel.com
Date: Thu, 10 Mar 2016 09:44:11 -0500
Message-Id: <1457621051-17317-1-git-send-email-klarose@sandvine.com>
X-Mailer: git-send-email 1.8.3.1
Cc: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v2] virtio: fix rx ring descriptor starvation
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK
X-List-Received-Date: Thu, 10 Mar 2016 14:44:14 -0000

Virtio has an mbuf descriptor ring containing mbufs to be used for
receiving traffic. When the host queues traffic to be sent to the
guest, it consumes these descriptors. If none exist, it discards the
packet.

The virtio pmd allocates mbufs to the descriptor ring every time it
successfully receives a packet. However, it never does so if it does
not receive a valid packet. If the descriptor ring is exhausted, and
the mbuf mempool does not have any mbufs free (which can happen for
various reasons, such as queueing along the processing pipeline), then
the receive call will not allocate any mbufs to the descriptor ring,
and when it finishes, the descriptor ring will be empty. Since the
ring is empty, we will never receive a packet again, which means we
will never allocate mbufs to the ring: we are stuck.

Ultimately, the problem is a circular dependency: receiving packets is
what keeps the descriptor ring non-empty, and a non-empty descriptor
ring is required to receive packets.
To fix the problem, this patch makes virtio always try to allocate
mbufs to the descriptor ring, if necessary, when polling for packets.
Do this by removing the early exit when no packets were received.
Since the packet loop later does nothing if there are no packets, this
is fine.

I reproduced the problem by pushing packets through a pipelined system
(such as the client_server sample application) after artificially
decreasing the size of the mbuf pool and introducing a delay in a
secondary stage. Without the fix, the process stops receiving packets
fairly quickly. With the fix, it continues to receive packets.

Signed-off-by: Kyle Larose
---
v2:
 * Added missing sign-off.
 * Cleaned up the commit message a bit.
---
 drivers/net/virtio/virtio_rxtx.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 41a1366..9d2f7d6 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -571,9 +571,6 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	if (likely(num > DESC_PER_CACHELINE))
 		num = num - ((rxvq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
 
-	if (num == 0)
-		return 0;
-
 	num = virtqueue_dequeue_burst_rx(rxvq, rcv_pkts, len, num);
 	PMD_RX_LOG(DEBUG, "used:%d dequeue:%d", nb_used, num);
 
@@ -671,9 +668,6 @@ virtio_recv_mergeable_pkts(void *rx_queue,
 	virtio_rmb();
 
-	if (nb_used == 0)
-		return 0;
-
 	PMD_RX_LOG(DEBUG, "used:%d\n", nb_used);
 
 	hw = rxvq->hw;
-- 
1.8.3.1