From mboxrd@z Thu Jan 1 00:00:00 1970
From: marco <mac_leehk@yahoo.com.hk>
To: dev@dpdk.org
Date: Thu, 23 Jul 2015 09:48:55 +0800
Message-Id: <1437616135-5364-1-git-send-email-mac_leehk@yahoo.com.hk>
X-Mailer: git-send-email 1.7.9.5
Subject: [dpdk-dev] [PATCH] vmxnet3: fix RX stall when mbuf allocation fails

The vmxnet3 PMD can suddenly stop receiving packets after a lot of
traffic has come in. The root cause is an mbuf allocation failure in
vmxnet3_post_rx_bufs(): when it is called from vmxnet3_recv_pkts() the
failure is not handled, so the RX descriptor is left without a fresh
mbuf while the ring counters still advance. Eventually no descriptor
has a buffer attached and no packet can be received any more.

The fix allocates the replacement mbuf first. If the allocation fails,
the receive loop stops and the packet stays in the RX descriptor, so
no packet is dropped. If the allocation succeeds, vmxnet3_renew_desc()
refills the descriptor with the new mbuf.
---
 drivers/net/vmxnet3/vmxnet3_rxtx.c | 54 +++++++++++++++++++++++++++++++++++-
 1 file changed, 53 insertions(+), 1 deletion(-)
 mode change 100644 => 100755 drivers/net/vmxnet3/vmxnet3_rxtx.c

diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
old mode 100644
new mode 100755
index 39ad6ef..d560bbb
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -421,6 +421,51 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
+static inline void
+vmxnet3_renew_desc(vmxnet3_rx_queue_t *rxq, uint8_t ring_id, struct rte_mbuf *mbuf)
+{
+	uint32_t val = 0;
+	struct vmxnet3_cmd_ring *ring = &rxq->cmd_ring[ring_id];
+
+	struct Vmxnet3_RxDesc *rxd;
+	vmxnet3_buf_info_t *buf_info = &ring->buf_info[ring->next2fill];
+
+	rxd = (struct Vmxnet3_RxDesc *)(ring->base + ring->next2fill);
+
+	if (ring->rid == 0) {
+		/* Usually: One HEAD type buf per packet
+		 * val = (ring->next2fill % rxq->hw->bufs_per_pkt) ?
+		 * VMXNET3_RXD_BTYPE_BODY : VMXNET3_RXD_BTYPE_HEAD;
+		 */
+
+		/* We use single packet buffer so all heads here */
+		val = VMXNET3_RXD_BTYPE_HEAD;
+	} else {
+		/* All BODY type buffers for 2nd ring; which won't be used at all by ESXi */
+		val = VMXNET3_RXD_BTYPE_BODY;
+	}
+
+	/*
+	 * Load mbuf pointer into buf_info[ring_size]
+	 * buf_info structure is equivalent to cookie for virtio-virtqueue
+	 */
+	buf_info->m = mbuf;
+	buf_info->len = (uint16_t)(mbuf->buf_len -
+				   RTE_PKTMBUF_HEADROOM);
+	buf_info->bufPA = RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mbuf);
+
+	/* Load Rx Descriptor with the buffer's GPA */
+	rxd->addr = buf_info->bufPA;
+
+	/* After this point rxd->addr MUST not be NULL */
+	rxd->btype = val;
+	rxd->len = buf_info->len;
+	/* Flip gen bit at the end to change ownership */
+	rxd->gen = ring->gen;
+
+	vmxnet3_cmd_ring_adv_next2fill(ring);
+
+}
 /*
  * Allocates mbufs and clusters. Post rx descriptors with buffer details
  * so that device can receive packets in those buffers.
@@ -575,8 +620,15 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	}
 
 	while (rcd->gen == rxq->comp_ring.gen) {
+		struct rte_mbuf *rep;
 		if (nb_rx >= nb_pkts)
 			break;
+
+		rep = rte_rxmbuf_alloc(rxq->mp);
+		if (rep == NULL) {
+			rxq->stats.rx_buf_alloc_failure++;
+			break;
+		}
 
 		idx = rcd->rxdIdx;
 		ring_idx = (uint8_t)((rcd->rqID == rxq->qid1) ? 0 : 1);
@@ -657,7 +709,7 @@ rcd_done:
 		VMXNET3_INC_RING_IDX_ONLY(rxq->cmd_ring[ring_idx].next2comp,
 					  rxq->cmd_ring[ring_idx].size);
 		/* It's time to allocate some new buf and renew descriptors */
-		vmxnet3_post_rx_bufs(rxq, ring_idx);
+		vmxnet3_renew_desc(rxq, ring_idx, rep);
 		if (unlikely(rxq->shared->ctrl.updateRxProd)) {
 			VMXNET3_WRITE_BAR0_REG(hw, rxprod_reg[ring_idx] + (rxq->queue_id * VMXNET3_REG_ALIGN),
 					       rxq->cmd_ring[ring_idx].next2fill);
-- 
1.7.9.5
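
For readers less familiar with the driver internals, here is a minimal,
self-contained sketch of the allocate-before-consume pattern the patch
applies. The types and helpers in it (struct buf, struct rx_desc,
buf_alloc(), recv_pkts(), pool_empty) are invented for illustration and
are not DPDK or vmxnet3 APIs; only the ordering matters: a replacement
buffer is obtained before the completed descriptor is handed to the
application, so an allocation failure leaves the descriptor and its
packet in place instead of leaving an empty slot in the ring.

#include <stdio.h>
#include <stdlib.h>

#define RING_SIZE 4

struct buf {                    /* stand-in for struct rte_mbuf */
	char data[64];
};

struct rx_desc {                /* stand-in for a vmxnet3 RX descriptor */
	struct buf *attached;   /* buffer currently owned by the "hardware" */
};

/* Toy allocator that can be forced to fail, to model mempool exhaustion. */
static int pool_empty;

static struct buf *buf_alloc(void)
{
	return pool_empty ? NULL : malloc(sizeof(struct buf));
}

static struct rx_desc ring[RING_SIZE];
static unsigned int alloc_failures;

/*
 * Receive at most nb_pkts completed descriptors starting at *next.
 * The key point: a replacement buffer is allocated *before* the completed
 * descriptor is consumed.  On failure we stop and leave the descriptor and
 * its buffer untouched, so the ring never ends up with an empty slot.
 */
static unsigned int recv_pkts(unsigned int *next, struct buf **rx_pkts,
			      unsigned int nb_pkts)
{
	unsigned int nb_rx = 0;

	while (nb_rx < nb_pkts) {
		struct rx_desc *rxd = &ring[*next % RING_SIZE];
		struct buf *rep = buf_alloc();

		if (rep == NULL) {
			/* No replacement: keep the packet in the descriptor
			 * and retry on the next poll, do not lose the slot. */
			alloc_failures++;
			break;
		}

		rx_pkts[nb_rx++] = rxd->attached; /* hand filled buffer to app */
		rxd->attached = rep;              /* refill the slot at once   */
		(*next)++;
	}
	return nb_rx;
}

int main(void)
{
	struct buf *pkts[RING_SIZE];
	unsigned int next = 0;
	unsigned int i, got;

	for (i = 0; i < RING_SIZE; i++)
		ring[i].attached = buf_alloc();

	pool_empty = 1;           /* simulate mempool exhaustion */
	got = recv_pkts(&next, pkts, RING_SIZE);
	printf("got %u pkts, %u alloc failures, ring still fully populated\n",
	       got, alloc_failures);

	pool_empty = 0;           /* pool recovers, reception resumes */
	got = recv_pkts(&next, pkts, RING_SIZE);
	printf("got %u pkts after the pool recovered\n", got);
	return 0;
}

This mirrors what the patch does in vmxnet3_recv_pkts(): rte_rxmbuf_alloc()
is called at the top of the loop, a failure increments rx_buf_alloc_failure
and breaks out, and only a successful allocation reaches
vmxnet3_renew_desc(), which attaches the new mbuf and advances next2fill.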