From mboxrd@z Thu Jan 1 00:00:00 1970
From: Marco Lee
To: dev@dpdk.org
Cc: Marco Lee
Date: Thu, 23 Jul 2015 16:51:32 +0800
Message-Id: <1437641492-7622-1-git-send-email-mac_leehk@yahoo.com.hk>
X-Mailer: git-send-email 1.7.9.5
Subject: 
[dpdk-dev] [PATCH] vmxnet3: fix RX deadlock when mbuf allocation fails under heavy traffic
List-Id: patches and discussions about DPDK

The RX path of the VMXNET3 PMD can deadlock when a large amount of traffic comes in. The root cause is an mbuf allocation failure inside vmxnet3_post_rx_bufs(), which has no error handling when called from vmxnet3_recv_pkts(): the RX descriptor is left without a free mbuf, but the completion index still advances. Eventually no packet can be received at all.

The fix is to allocate the replacement mbuf first. If the allocation fails, the old mbuf is reused for the descriptor (the received packet is dropped). If the allocation succeeds, the new helper vmxnet3_renew_desc() renews the RX descriptor with the fresh mbuf.

Signed-off-by: Marco Lee
---
 drivers/net/vmxnet3/vmxnet3_rxtx.c | 36 +++++++++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 39ad6ef..3a15009 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -421,6 +421,34 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
+static inline void
+vmxnet3_renew_desc(vmxnet3_rx_queue_t *rxq, uint8_t ring_id, struct rte_mbuf *mbuf)
+{
+	uint32_t val = 0;
+	struct vmxnet3_cmd_ring *ring = &rxq->cmd_ring[ring_id];
+
+	struct Vmxnet3_RxDesc *rxd;
+	vmxnet3_buf_info_t *buf_info = &ring->buf_info[ring->next2fill];
+
+	rxd = (struct Vmxnet3_RxDesc *)(ring->base + ring->next2fill);
+
+	if (ring->rid == 0) {
+		val = VMXNET3_RXD_BTYPE_HEAD;
+	} else {
+		val = VMXNET3_RXD_BTYPE_BODY;
+	}
+
+	buf_info->m = mbuf;
+	buf_info->len = (uint16_t)(mbuf->buf_len - RTE_PKTMBUF_HEADROOM);
+	buf_info->bufPA = RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mbuf);
+
+	rxd->addr = buf_info->bufPA;
+	rxd->btype = val;
+	rxd->len = buf_info->len;
+	rxd->gen = ring->gen;
+
+	vmxnet3_cmd_ring_adv_next2fill(ring);
+}
 /*
  * Allocates mbufs and clusters. Post rx descriptors with buffer details
  * so that device can receive packets in those buffers.
@@ -578,6 +606,8 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		if (nb_rx >= nb_pkts)
 			break;
 
+		struct rte_mbuf *rep;
+		rep = rte_rxmbuf_alloc(rxq->mb_pool);
 		idx = rcd->rxdIdx;
 		ring_idx = (uint8_t)((rcd->rqID == rxq->qid1) ? 0 : 1);
 		rxd = (Vmxnet3_RxDesc *)rxq->cmd_ring[ring_idx].base + idx;
@@ -651,13 +681,17 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		vmxnet3_rx_offload(rcd, rxm);
 
+		if (unlikely(rep == NULL)) {
+			rep = rxm;
+			goto rcd_done;
+		}
 		rx_pkts[nb_rx++] = rxm;
 
 rcd_done:
 		rxq->cmd_ring[ring_idx].next2comp = idx;
 		VMXNET3_INC_RING_IDX_ONLY(rxq->cmd_ring[ring_idx].next2comp, rxq->cmd_ring[ring_idx].size);
 
 		/* It's time to allocate some new buf and renew descriptors */
-		vmxnet3_post_rx_bufs(rxq, ring_idx);
+		vmxnet3_renew_desc(rxq, ring_idx, rep);
 		if (unlikely(rxq->shared->ctrl.updateRxProd)) {
 			VMXNET3_WRITE_BAR0_REG(hw, rxprod_reg[ring_idx] + (rxq->queue_id * VMXNET3_REG_ALIGN),
 					       rxq->cmd_ring[ring_idx].next2fill);
-- 
1.7.9.5
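
For reviewers, the descriptor-preserving idea in the patch (allocate the replacement mbuf before consuming the ring slot; on failure, recycle the old mbuf and drop the packet) can be sketched standalone. The types, names, and the toggleable allocator below are mocks standing in for rte_rxmbuf_alloc() and the real RX ring, not the DPDK API:

```c
#include <stddef.h>

/* Mock mbuf and a pool whose allocation can be forced to fail,
 * standing in for rte_rxmbuf_alloc() (illustrative only). */
struct mock_mbuf { int id; };

static int pool_has_bufs;                /* 0 => simulate allocation failure */
static struct mock_mbuf fresh = { 42 };  /* the one buffer our mock pool owns */

static struct mock_mbuf *mock_alloc(void)
{
	return pool_has_bufs ? &fresh : NULL;
}

/* Receive from one descriptor slot. Allocate the replacement FIRST:
 * on failure, put the old mbuf back (packet dropped, ring stays full);
 * on success, renew the slot and hand the old mbuf to the caller. */
static struct mock_mbuf *recv_one(struct mock_mbuf **slot)
{
	struct mock_mbuf *rep = mock_alloc(); /* may be NULL under pressure */
	struct mock_mbuf *rxm = *slot;        /* mbuf holding the packet */

	if (rep == NULL) {
		*slot = rxm;                  /* reuse old mbuf: no descriptor lost */
		return NULL;                  /* packet dropped, RX keeps working */
	}
	*slot = rep;                          /* descriptor renewed with fresh mbuf */
	return rxm;                           /* deliver the packet */
}
```

The invariant this maintains, that every RX descriptor always owns an mbuf, is exactly what the original code lost when vmxnet3_post_rx_bufs() failed silently.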