From: "Charles (Chas) Williams"
To: dev@dpdk.org
Date: Thu, 1 Jun 2017 08:24:16 -0400
In-Reply-To: <1495216560-12920-5-git-send-email-ciwillia@brocade.com>
References: <1495216560-12920-1-git-send-email-ciwillia@brocade.com>
 <1495216560-12920-5-git-send-email-ciwillia@brocade.com>
Subject: Re: [dpdk-dev] [PATCH 5/6] net/vmxnet3: receive queue lockup and memleak

While looking at another issue, I noticed that one of the issues fixed
in this commit has already been fixed in the last DPDK release by:

commit 8fce14b789aecdb4345a62f6980e7b6e7f4ba245
Author: Stefan Puiu
Date:   Mon Dec 19 11:40:53 2016 +0200

    net/vmxnet3: fix Rx deadlock

    Our use case is that we have an app that needs to keep mbufs around
    for a while. We've seen cases where, when vmxnet3_post_rx_bufs() is
    called from vmxnet3_recv_pkts(), it doesn't succeed in adding any
    mbufs to any RX descriptors (it returns -err). Since there are no
    mbufs that the virtual hardware can use, no packets will be
    received after this; the driver won't refill the mbufs after this,
    so it gets stuck in this state. I call this a deadlock for lack of
    a better term - the virtual HW waits for free mbufs, while the app
    waits for the hardware to notify it of data (by flipping the
    generation bit on the used Rx descriptors). Note that after this,
    the app can't recover.

    ...

The mbuf leak due to an error during receive still exists. That fix
can be refactored into a new commit.
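To illustrate the failure mode, here is a self-contained toy model (not
the driver's real code; toy_alloc() stands in for rte_pktmbuf_alloc(),
and toy_refill() plays the role of vmxnet3_post_rx_bufs()) of the
"refill on every poll" rule that breaks the deadlock:

#include <stdio.h>
#include <stddef.h>

#define RING_SIZE 4

struct toy_ring {
	void *buf[RING_SIZE];	/* NULL = descriptor owns no buffer */
};

/* Stand-in for rte_pktmbuf_alloc(): fails when the pool is empty. */
static void *
toy_alloc(int *pool)
{
	static char storage[RING_SIZE];

	return (*pool > 0) ? (void *)&storage[--(*pool)] : NULL;
}

/* Post a buffer to every empty descriptor; may legitimately post none. */
static int
toy_refill(struct toy_ring *r, int *pool)
{
	int posted = 0;
	size_t i;

	for (i = 0; i < RING_SIZE; i++) {
		if (r->buf[i] == NULL) {
			void *b = toy_alloc(pool);

			if (b == NULL)
				break;	/* pool empty: retry on the next poll */
			r->buf[i] = b;
			posted++;
		}
	}
	return posted;
}

/* Every poll ends with an unconditional refill attempt. */
static void
toy_poll(struct toy_ring *r, int *pool)
{
	/* ... harvesting of completed descriptors would go here ... */
	printf("posted %d buffer(s)\n", toy_refill(r, pool));
}

int
main(void)
{
	struct toy_ring ring = { { NULL } };
	int pool = 0;			/* pool momentarily exhausted */

	toy_poll(&ring, &pool);		/* posts 0; old driver stalled here */
	pool = RING_SIZE;		/* app returns its mbufs to the pool */
	toy_poll(&ring, &pool);		/* next poll repopulates the ring */
	return 0;
}

The point is simply that the refill is retried unconditionally on every
receive call, so the ring recovers as soon as the application frees
mbufs back to the pool, instead of stalling forever after one failed
allocation.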
On 05/19/2017 01:55 PM, Charles (Chas) Williams wrote:
> From: Mandeep Rohilla
>
> The receive queue can lockup if all the rx descriptors have lost
> their mbufs and temporarily there are no mbufs available. This
> can happen if there is an mbuf leak or if the application holds
> on to the mbuf for a while.
>
> This also addresses an mbuf leak in an error condition during
> packet receive.
>
> Signed-off-by: Mandeep Rohilla
> ---
>  drivers/net/vmxnet3/vmxnet3_rxtx.c | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
>
> diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
> index d8713a1..d21679d 100644
> --- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
> +++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
> @@ -731,6 +731,7 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>  	uint16_t nb_rx;
>  	uint32_t nb_rxd, idx;
>  	uint8_t ring_idx;
> +	uint8_t i;
>  	vmxnet3_rx_queue_t *rxq;
>  	Vmxnet3_RxCompDesc *rcd;
>  	vmxnet3_buf_info_t *rbi;
> @@ -800,6 +801,12 @@
>  			   (int)(rcd - (struct Vmxnet3_RxCompDesc *)
>  			   rxq->comp_ring.base), rcd->rxdIdx);
>  			rte_pktmbuf_free_seg(rxm);
> +			if (rxq->start_seg) {
> +				struct rte_mbuf *start = rxq->start_seg;
> +
> +				rxq->start_seg = NULL;
> +				rte_pktmbuf_free(start);
> +			}
>  			goto rcd_done;
>  		}
>
> @@ -893,6 +900,18 @@
>  		}
>  	}
>
> +	/*
> +	 * Try to replenish the rx descriptors with the new mbufs
> +	 */
> +	for (i = 0; i < VMXNET3_RX_CMDRING_SIZE; i++) {
> +		vmxnet3_post_rx_bufs(rxq, i);
> +		if (unlikely(rxq->shared->ctrl.updateRxProd)) {
> +			VMXNET3_WRITE_BAR0_REG(hw,
> +					       rxprod_reg[i] +
> +					       (rxq->queue_id * VMXNET3_REG_ALIGN),
> +					       rxq->cmd_ring[i].next2fill);
> +		}
> +	}
>  	return nb_rx;
>  }
>
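On the remaining leak: when an error completion arrives mid-packet,
freeing only the current segment leaks the partially assembled chain
headed by start_seg, which is what the hunk above fixes. A stand-alone
toy illustration (not driver code; free_one() plays the role of
rte_pktmbuf_free_seg(), free_all() of rte_pktmbuf_free()):

#include <stdlib.h>

struct seg {
	struct seg *next;
};

/* Stand-in for rte_pktmbuf_free_seg(): releases one segment only. */
static void
free_one(struct seg *s)
{
	free(s);
}

/* Stand-in for rte_pktmbuf_free(): releases the whole chain. */
static void
free_all(struct seg *s)
{
	while (s != NULL) {
		struct seg *next = s->next;

		free(s);
		s = next;
	}
}

/*
 * Error path: drop the segment that carried the error AND the partial
 * chain collected so far, then clear start_seg so the next descriptor
 * starts a fresh packet. Freeing only 'rxm' (the old behavior) leaks
 * every segment already chained.
 */
static void
on_rx_error(struct seg **start_seg, struct seg *rxm)
{
	free_one(rxm);
	if (*start_seg != NULL) {
		free_all(*start_seg);
		*start_seg = NULL;
	}
}

int
main(void)
{
	/* Two segments of a partial packet, then an error segment. */
	struct seg *a = malloc(sizeof(*a));
	struct seg *b = malloc(sizeof(*b));
	struct seg *bad = malloc(sizeof(*bad));
	struct seg *start_seg = a;

	a->next = b;
	b->next = NULL;
	bad->next = NULL;

	on_rx_error(&start_seg, bad);	/* nothing is leaked */
	return 0;
}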