From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 25 Mar 2015 10:30:44 +0000
From: Bruce Richardson
To: Dor Green
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Packet data out of bounds after rte_eth_rx_burst
Message-ID: <20150325103044.GA1036@bricha3-MOBL3>
References: <20150323145958.GA12720@bricha3-MOBL3>
 <20150323212459.GA5502@mhcomputing.net>
 <20150324131715.GA11384@bricha3-MOBL3>
 <20150324162143.GA6276@bricha3-MOBL3>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Organization: Intel Shannon Ltd.
User-Agent: Mutt/1.5.23 (2014-03-12)

On Wed, Mar 25, 2015 at 10:22:49AM +0200, Dor Green wrote:
> The printout:
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 11, SFP+: 4
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x154d
> PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f80c0af0e40
> hw_ring=0x7f811630ce00 dma_addr=0xf1630ce00
> PMD: check_rx_burst_bulk_alloc_preconditions(): Rx Burst Bulk Alloc
> Preconditions: rxq->rx_free_thresh=0, RTE_PMD_IXGBE_RX_MAX_BURST=32
> PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
> not satisfied, Scattered Rx is requested, or
> RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC is not enabled (port=0, queue=0).
> PMD: check_rx_burst_bulk_alloc_preconditions(): Rx Burst Bulk Alloc
> Preconditions: rxq->rx_free_thresh=0, RTE_PMD_IXGBE_RX_MAX_BURST=32
> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f80c0af0900
> hw_ring=0x7f811631ce80 dma_addr=0xf1631ce80
> PMD: set_tx_function(): Using full-featured tx code path
> PMD: set_tx_function(): - txq_flags = 0 [IXGBE_SIMPLE_FLAGS=f01]
> PMD: set_tx_function(): - tx_rs_thresh = 32 [RTE_PMD_IXGBE_TX_MAX_BURST=32]
>
> Can't seem to get any example app to crash. Is there something I can
> run on one port which will look at the actual data of the packets?
>
> The mempool is (I think) set up normally:
>
> pktmbuf_pool = rte_mempool_create("mbuf_pool", MBUFNB, MBUFSZ, 0,
>                                   sizeof(struct rte_pktmbuf_pool_private),
>                                   rte_pktmbuf_pool_init, NULL,
>                                   rte_pktmbuf_init, NULL, NUMA_SOCKET, 0);
>
>
> For good measure, here's the rest of the port setup (shortened, in
> addition to what I showed below):
>
> static struct rte_eth_rxconf const rxconf = {
>     .rx_thresh = {
>         .pthresh = 8,
>         .hthresh = 8,
>         .wthresh = 100,

This value for wthresh looks very high. Can you perhaps just try using the
defaults for the thresholds? [Passing in NULL instead of the rxconf will just
use the defaults for rx_queue_setup in the latest DPDK versions.]
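For example, something along these lines (untested sketch, reusing the names
from your snippet above) should pick up the driver's own defaults:

    /* NULL rx_conf => the PMD's default rx thresholds are used */
    int ret = rte_eth_rx_queue_setup(port, 0, hwsize, NUMA_SOCKET,
                                     NULL, pktmbuf_pool);
    if (ret < 0)
        rte_exit(EXIT_FAILURE, "rx queue setup failed: %d\n", ret);

On your question about looking at the actual data of the packets, see the
small bounds-check sketch at the bottom of this mail.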
/Bruce

> },
> .rx_free_thresh = 0,
> .rx_drop_en = 0,
> };
>
> rte_eth_dev_configure(port, 1, 1, &ethconf);
> rte_eth_rx_queue_setup(port, 0, hwsize, NUMA_SOCKET, &rxconf, pktmbuf_pool);
> rte_eth_dev_start(port);
>
>
> On Tue, Mar 24, 2015 at 6:21 PM, Bruce Richardson
> wrote:
> > On Tue, Mar 24, 2015 at 04:10:18PM +0200, Dor Green wrote:
> >> 1 . The eth_conf is:
> >>
> >> static struct rte_eth_conf const ethconf = {
> >>     .link_speed = 0,
> >>     .link_duplex = 0,
> >>
> >>     .rxmode = {
> >>         .mq_mode = ETH_MQ_RX_RSS,
> >>         .max_rx_pkt_len = ETHER_MAX_LEN,
> >>         .split_hdr_size = 0,
> >>         .header_split = 0,
> >>         .hw_ip_checksum = 0,
> >>         .hw_vlan_filter = 0,
> >>         .jumbo_frame = 0,
> >>         .hw_strip_crc = 0, /**< CRC stripped by hardware */
> >>     },
> >>
> >>     .txmode = {
> >>     },
> >>
> >>     .rx_adv_conf = {
> >>         .rss_conf = {
> >>             .rss_key = NULL,
> >>             .rss_hf = ETH_RSS_IPV4 | ETH_RSS_IPV6,
> >>         }
> >>     },
> >>
> >>     .fdir_conf = {
> >>         .mode = RTE_FDIR_MODE_SIGNATURE,
> >>
> >>     },
> >>
> >>     .intr_conf = {
> >>         .lsc = 0,
> >>     },
> >> };
> >>
> >> I've tried setting jumbo frames on with a larger packet length and
> >> even turning off RSS/FDIR. No luck.
> >>
> >> I don't see anything relating to the port in the initial prints, what
> >> are you looking for?
> >
> > I'm looking for the PMD initialization text, like that shown below (from testpmd):
> > Configuring Port 0 (socket 0)
> > PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f9ba08cd700 hw_ring=0x7f9ba0b00080 dma_addr=0x36d00080
> > PMD: ixgbe_set_tx_function(): Using simple tx code path
> > PMD: ixgbe_set_tx_function(): Vector tx enabled.
> > PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f9ba08cce80 hw_ring=0x7f9ba0b10080 dma_addr=0x36d10080
> > PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size no less than 32.
> > Port 0: 68:05:CA:04:51:3A
> > Configuring Port 1 (socket 0)
> > PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f9ba08cab40 hw_ring=0x7f9ba0b20100 dma_addr=0x36d20100
> > PMD: ixgbe_set_tx_function(): Using simple tx code path
> > PMD: ixgbe_set_tx_function(): Vector tx enabled.
> > PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f9ba08ca2c0 hw_ring=0x7f9ba0b30100 dma_addr=0x36d30100
> > PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size no less than 32.
> > Port 1: 68:05:CA:04:51:38
> >
> > This tells us what RX and TX functions are going to be used for each port.
> >
> >>
> >> 2. The packet is a normal, albeit somewhat large (1239 bytes) TCP data
> >> packet (SSL certificate data, specifically).
> >> One important thing of note that I've just realised is that it's not
> >> this "packet of death" which causes the segmentation fault (i.e. has
> >> an out-of-bounds address for its data), but the packet afterwards -- no
> >> matter what packet it is.
> >>
> > Can this problem be reproduced using testpmd or any of the standard dpdk
> > example apps, by sending in the same packet sequence?
> >
> > Is there anything unusual being done in the setup of the mempool used for the
> > packet buffers?
> >
> > /Bruce
> >
> >>
> >> On Tue, Mar 24, 2015 at 3:17 PM, Bruce Richardson
> >> wrote:
> >> > On Tue, Mar 24, 2015 at 12:54:14PM +0200, Dor Green wrote:
> >> >> I've managed to fix it so 1.8 works, and the segmentation fault still occurs.
> >> >>
> >> >> On Tue, Mar 24, 2015 at 11:55 AM, Dor Green wrote:
> >> >> > I tried 1.8, but that fails to initialize my device and fails at the pci probe:
> >> >> > "Cause: Requested device 0000:04:00.1 cannot be used"
> >> >> > Can't even compile 2.0rc2 atm, getting:
> >> >> > "/usr/lib/gcc/x86_64-linux-gnu/4.6/include/emmintrin.h:701:1: note:
> >> >> > expected '__m128i' but argument is of type 'int'"
> >> >> > For reasons I don't understand.
> >> >> >
> >> >> > As for the example apps (in 1.7), I can run them properly but I don't
> >> >> > think any of them do the same processing as I do. Note that mine does
> >> >> > work with most packets.
> >> >> >
> >> >> >
> >> >
> >> > Couple of further questions:
> >> > 1. What config options are being used to configure the port and what is the
> >> > output printed at port initialization time? This is needed to let us track down
> >> > what specific RX path is being used inside the ixgbe driver
> >> > 2. What type of packets specifically cause problems? Is it reproducible with
> >> > one particular packet, or packet type? Are you sending in jumbo-frames?
> >> >
> >> > Regards,
> >> > /Bruce
> >> >
> >> >> > On Mon, Mar 23, 2015 at 11:24 PM, Matthew Hall wrote:
> >> >> >> On Mon, Mar 23, 2015 at 05:19:00PM +0200, Dor Green wrote:
> >> >> >>> I changed it to free and it still happens. Note that the segmentation fault
> >> >> >>> happens before that anyway.
> >> >> >>>
> >> >> >>> I am using 1.7.1 at the moment. I can try using a newer version.
> >> >> >>
> >> >> >> I'm using 1.7.X in my open-source DPDK-based app and it works, but I have an
> >> >> >> IGB 1-gigabit NIC though, and how RX / TX work are quite driver specific of
> >> >> >> course.
> >> >> >>
> >> >> >> I suspect there's some issue with how things are working in your IXGBE NIC
> >> >> >> driver / setup. Do the same failures occur inside of the DPDK's own sample
> >> >> >> apps?
> >> >> >>
> >> >> >> Matthew.
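P.S. On examining the packet data: one cheap check is to validate each mbuf
returned by rte_eth_rx_burst() before touching its contents. Rough, untested
sketch (buf_addr/buf_len and the pktmbuf macros as in the 1.7/1.8 mbuf API):

    #include <rte_mbuf.h>

    /* return non-zero if the mbuf's data area lies inside its own buffer */
    static inline int
    mbuf_data_in_bounds(const struct rte_mbuf *m)
    {
        const char *data = rte_pktmbuf_mtod(m, const char *);
        const char *buf  = (const char *)m->buf_addr;

        return data >= buf &&
               data + rte_pktmbuf_data_len(m) <= buf + m->buf_len;
    }

Logging (rather than crashing) when that check fails should at least show
whether the PMD is handing back a corrupted mbuf or whether something later
in the application is trampling it.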