From: Marco Lee
To: dev@dpdk.org
Cc: Marco Lee
Date: Fri, 24 Jul 2015 08:53:38 +0800
Message-Id: <1437699218-8783-1-git-send-email-mac_leehk@yahoo.com.hk>
X-Mailer: git-send-email 1.7.9.5
Subject: [dpdk-dev] [PATCH] vmxnet3: fix RX deadlock under heavy incoming traffic

The RX path of the VMXNET3 PMD can deadlock when a burst of heavy traffic
arrives. The root cause is an mbuf allocation failure inside
vmxnet3_post_rx_bufs() that is not handled when it is called from
vmxnet3_recv_pkts(): the RX descriptor is left without a free mbuf, yet the
ring counter still advances. Once every descriptor has lost its buffer, no
packet can be received any more.

The fix allocates the replacement mbuf first. If the allocation fails, the
mbuf that was just received is reused for the descriptor, so the ring never
runs out of buffers (that packet is dropped instead). If the allocation
succeeds, vmxnet3_recv_pkts() calls the new helper vmxnet3_renew_desc(),
which refills the RX descriptor with the freshly allocated mbuf.
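In other words, the receive loop now follows an allocate-before-consume
pattern. The toy model below illustrates only the idea, outside of DPDK:
"struct buf", "ring_slot", "deliver()" and "handle_completion()" are invented
stand-ins for rte_mbuf, the RX descriptor, rx_pkts[] and the completion
handling in vmxnet3_recv_pkts(); the real driver code follows in the patch.

/* Illustrative sketch only -- not driver code. */
#include <stdlib.h>

struct buf { char data[2048]; };
struct ring_slot { struct buf *b; };     /* descriptor owning one buffer */

static void deliver(struct buf *b) { free(b); }  /* hand off to the app */

static void handle_completion(struct ring_slot *slot)
{
	/* Allocate the replacement BEFORE consuming the received buffer. */
	struct buf *rep = malloc(sizeof(*rep));

	if (rep == NULL) {
		/* Allocation failed: keep (reuse) the old buffer in the
		 * descriptor. The packet is dropped, but the ring still
		 * owns a buffer and RX can continue later. */
		return;
	}

	deliver(slot->b);     /* safe: a replacement is already in hand */
	slot->b = rep;        /* re-arm the descriptor with the new buffer */
}

int main(void)
{
	struct ring_slot slot = { .b = malloc(sizeof(struct buf)) };

	handle_completion(&slot);  /* one completion, ring stays armed */
	free(slot.b);
	return 0;
}

The key property is that the descriptor always keeps a valid buffer: on
allocation failure the received packet is sacrificed rather than the ring's
capacity.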
Signed-off-by: Marco Lee
---
 drivers/net/vmxnet3/vmxnet3_rxtx.c | 37 +++++++++++++++++++++++++++++++++++-
 1 file changed, 36 insertions(+), 1 deletion(-)

diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 39ad6ef..cbed438 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -421,6 +421,35 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
+static inline void
+vmxnet3_renew_desc(vmxnet3_rx_queue_t *rxq, uint8_t ring_id,
+		   struct rte_mbuf *mbuf)
+{
+	uint32_t val = 0;
+	struct vmxnet3_cmd_ring *ring = &rxq->cmd_ring[ring_id];
+
+	struct Vmxnet3_RxDesc *rxd;
+	vmxnet3_buf_info_t *buf_info = &ring->buf_info[ring->next2fill];
+
+	rxd = (struct Vmxnet3_RxDesc *)(ring->base + ring->next2fill);
+
+	if (ring->rid == 0)
+		val = VMXNET3_RXD_BTYPE_HEAD;
+	else
+		val = VMXNET3_RXD_BTYPE_BODY;
+
+
+	buf_info->m = mbuf;
+	buf_info->len = (uint16_t)(mbuf->buf_len - RTE_PKTMBUF_HEADROOM);
+	buf_info->bufPA = RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mbuf);
+
+	rxd->addr = buf_info->bufPA;
+	rxd->btype = val;
+	rxd->len = buf_info->len;
+	rxd->gen = ring->gen;
+
+	vmxnet3_cmd_ring_adv_next2fill(ring);
+}
 /*
  * Allocates mbufs and clusters. Post rx descriptors with buffer details
  * so that device can receive packets in those buffers.
@@ -578,6 +607,8 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		if (nb_rx >= nb_pkts)
 			break;
 
+		struct rte_mbuf *rep;
+		rep = rte_rxmbuf_alloc(rxq->mb_pool);
 		idx = rcd->rxdIdx;
 		ring_idx = (uint8_t)((rcd->rqID == rxq->qid1) ? 0 : 1);
 		rxd = (Vmxnet3_RxDesc *)rxq->cmd_ring[ring_idx].base + idx;
@@ -651,13 +682,17 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 		vmxnet3_rx_offload(rcd, rxm);
 
+		if (unlikely(rep == NULL)) {
+			rep = rxm;
+			goto rcd_done;
+		}
 		rx_pkts[nb_rx++] = rxm;
 rcd_done:
 		rxq->cmd_ring[ring_idx].next2comp = idx;
 		VMXNET3_INC_RING_IDX_ONLY(rxq->cmd_ring[ring_idx].next2comp, rxq->cmd_ring[ring_idx].size);
 
 		/* It's time to allocate some new buf and renew descriptors */
-		vmxnet3_post_rx_bufs(rxq, ring_idx);
+		vmxnet3_renew_desc(rxq, ring_idx, rep);
 		if (unlikely(rxq->shared->ctrl.updateRxProd)) {
 			VMXNET3_WRITE_BAR0_REG(hw, rxprod_reg[ring_idx] + (rxq->queue_id * VMXNET3_REG_ALIGN),
 					       rxq->cmd_ring[ring_idx].next2fill);
-- 
1.7.9.5