From: jinho hwang
Date: Fri, 26 Jul 2013 18:31:37 -0400
To: Scott Talbert
Cc: dev
Subject: Re: [dpdk-dev] NIC Stops Transmitting

On Fri, Jul 26, 2013 at 4:04 PM, Scott Talbert wrote:
> On Fri, 26 Jul 2013, Stephen Hemminger wrote:
>
>>> I'm writing an application using DPDK that transmits a large number of
>>> packets (it doesn't receive any). When I transmit at 2 Gb/sec,
>>> everything will run fine for several seconds (the receiver is receiving
>>> at the correct rate), but then the NIC appears to get 'stuck' and
>>> doesn't transmit any more packets. In this state, rte_eth_tx_burst() is
>>> returning zero (suggesting that there are no available transmit
>>> descriptors), but even if I sleep() for a second and try again,
>>> rte_eth_tx_burst() still returns 0. It almost appears as if a packet
>>> gets stuck in the transmit ring and keeps everything from flowing. I'm
>>> using an Intel 82599EB NIC.
>>>
>> Make sure there is enough memory for mbufs.
>> Also, what are your ring size and transmit free threshold?
>> It is easy to instrument the driver to see where it is saying "no space
>> left".
>> Also be careful with threshold values; many combinations of
>> pthresh/hthresh/wthresh don't work. I would check the Intel reference
>> manual for your hardware.
>
> Thanks for the tips. I don't think I'm running out of mbufs, but I'll
> check that again. I am using these values from one of the examples, which
> are claimed to be correct for the 82599EB.
>
> /*
>  * These default values are optimized for use with the Intel(R) 82599 10 GbE
>  * Controller and the DPDK ixgbe PMD. Consider using other values for other
>  * network controllers and/or network drivers.
>  */
> #define TX_PTHRESH 36 /**< Default values of TX prefetch threshold reg. */
> #define TX_HTHRESH 0  /**< Default values of TX host threshold reg. */
> #define TX_WTHRESH 0  /**< Default values of TX write-back threshold reg. */
>
> static const struct rte_eth_txconf tx_conf = {
>     .tx_thresh = {
>         .pthresh = TX_PTHRESH,
>         .hthresh = TX_HTHRESH,
>         .wthresh = TX_WTHRESH,
>     },
>     .tx_free_thresh = 0, /* Use PMD default values */
>     .tx_rs_thresh = 0,   /* Use PMD default values */
> };
>
> /*
>  * Configurable number of RX/TX ring descriptors
>  */
> #define RTE_TEST_TX_DESC_DEFAULT 512
> static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;

Scott,

I am wondering whether you have multiple cores accessing the same receive
queue. I had this problem before, but after I made the number of receive
queues equal to the number of receiving cores, the problem disappeared (a
sketch of that setup is in the second postscript below). I did not dig into
it further, since the exact number of receive queues did not matter for my
application.

Jinho
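
P.S. To follow up on Stephen's suggestion to instrument the send path: here
is a minimal, untested sketch that logs mbuf pool occupancy whenever
rte_eth_tx_burst() returns zero, which should show whether the pool is
exhausted or the TX ring is genuinely stuck. send_burst() is a hypothetical
helper; port_id, queue_id, and mbuf_pool are placeholders for your own setup.

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>

static void
send_burst(uint8_t port_id, uint16_t queue_id,
           struct rte_mbuf **pkts, uint16_t n,
           struct rte_mempool *mbuf_pool)
{
        uint16_t sent = 0;

        while (sent < n) {
                uint16_t ret = rte_eth_tx_burst(port_id, queue_id,
                                                pkts + sent, n - sent);
                if (ret == 0) {
                        /* No descriptors were reclaimed: report pool
                         * occupancy to distinguish mbuf exhaustion from a
                         * stuck TX ring. */
                        printf("tx stalled: %u mbufs free in pool\n",
                               rte_mempool_count(mbuf_pool));
                        break;
                }
                sent += ret;
        }

        /* Drop whatever the NIC did not accept so mbufs cannot leak. */
        while (sent < n)
                rte_pktmbuf_free(pkts[sent++]);
}

If the free count is near zero when the stall happens, the pool is too small
(or mbufs are leaking somewhere); if the pool looks healthy, the descriptor
ring itself is not being cleaned, which points back at the
tx_free_thresh/tx_rs_thresh settings.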
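
P.P.S. And here is a minimal sketch of what I mean by one receive queue per
receiving core, in case it helps. The names nb_rx_cores, port_conf, rx_conf,
and mbuf_pool, as well as the descriptor count of 128, are assumptions;
adjust them for your application.

#include <rte_ethdev.h>

static int
setup_port(uint8_t port_id, uint16_t nb_rx_cores,
           struct rte_mempool *mbuf_pool,
           const struct rte_eth_conf *port_conf,
           const struct rte_eth_rxconf *rx_conf)
{
        uint16_t q;
        int ret;

        /* One RX queue per receiving core, one TX queue for the sender. */
        ret = rte_eth_dev_configure(port_id, nb_rx_cores, 1, port_conf);
        if (ret < 0)
                return ret;

        for (q = 0; q < nb_rx_cores; q++) {
                ret = rte_eth_rx_queue_setup(port_id, q, 128,
                                rte_eth_dev_socket_id(port_id),
                                rx_conf, mbuf_pool);
                if (ret < 0)
                        return ret;
        }
        return 0;
}

Each receiving lcore then polls only its own queue index with
rte_eth_rx_burst(port_id, my_queue, ...), so no queue is ever shared between
cores.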