Subject: Re: [dpdk-users] jumbo frame support..(more than 1500 bytes)
From: Chris Paquin
Date: Fri, 1 Sep 2017 11:41:29 -0400
To: Kyle Larose
Cc: Dharmesh Mehta, "users@dpdk.org"

First off, I apologize, as I am very new to DPDK. I have a very similar
issue. However, I have found that the packet drops (RX-nombuf) I see when
setting the MTU above 1500 are related to the mbuf size: the larger the
mbuf, the larger the packet I can receive. I cannot, however, set my mbuf
size larger than 6016 (bytes, presumably):

    Sep 1 11:25:56 rhel7 testpmd[3016]: USER1: create a new mbuf pool : n=171456, size=6016, socket=0

Anything larger, and I cannot launch testpmd:

    Sep 1 11:25:30 rhel7 testpmd[3008]: USER1: create a new mbuf pool : n=171456, size=6017, socket=0
    Sep 1 11:25:30 rhel7 testpmd[3008]: RING: Cannot reserve memory for tailq
    Sep 1 11:25:30 rhel7 testpmd[3008]: EAL: Error - exiting with code: 1
    Sep 1 11:25:30 rhel7 testpmd[3008]:   Cause: Creation of mbuf pool for socket 0 failed: Cannot allocate memory

In short, I can receive larger packets without dropping them by increasing
the mbuf size, so you might try the same. However, I cannot get close to
the desired MTU of 9000. I would love to know if you get it working.
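For reference, here is a minimal sketch of the pool-size relationship
described above, using the stock rte_pktmbuf_pool_create() allocator; the
pool name, helper name, and data-room size are illustrative, not a
recommendation:

    #include <stdlib.h>
    #include <rte_debug.h>
    #include <rte_errno.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    /* Each mbuf's data room must hold the largest frame segment plus
     * RTE_PKTMBUF_HEADROOM. With rxmode.enable_scatter set, frames larger
     * than one data room are chained across several mbufs; without it, an
     * entire frame must fit in a single mbuf. */
    #define JUMBO_DATA_ROOM (9216 + RTE_PKTMBUF_HEADROOM) /* illustrative */

    static struct rte_mempool *
    create_jumbo_pool(void) /* helper name is illustrative */
    {
        struct rte_mempool *pool = rte_pktmbuf_pool_create(
                "jumbo_pool",     /* pool name (illustrative) */
                171456,           /* mbuf count, as in the log above */
                128,              /* per-lcore cache size */
                0,                /* application private area size */
                JUMBO_DATA_ROOM,  /* data room per mbuf, in bytes */
                rte_socket_id()); /* allocate on the local NUMA socket */

        if (pool == NULL)
            rte_exit(EXIT_FAILURE, "mbuf pool creation failed: %s\n",
                     rte_strerror(rte_errno));
        return pool;
    }

Note that the pool's total footprint is roughly n * data room: at n=171456,
the 6016-byte pool above is already about 1 GB, so the failure at 6017
("Cannot allocate memory") looks more like hugepage exhaustion than a hard
mbuf-size cap; reserving more hugepages may be worth trying.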
CHRISTOPHER PAQUIN
SENIOR CLOUD CONSULTANT, RHCE, RHCSA-OSP
Red Hat
M: 770-906-7646

On Fri, Sep 1, 2017 at 8:55 AM, Kyle Larose wrote:

> How is it failing? Is it dropping with a frame-too-long counter? Are you
> sure it's not dropping before your device? Have you made sure the max
> frame size of every hop in between is large enough?
>
> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Dharmesh Mehta
> Sent: Friday, September 01, 2017 2:10 AM
> To: users@dpdk.org
> Subject: [dpdk-users] jumbo frame support..(more than 1500 bytes)
>
> Sorry for the resubmission. I am still stuck: I cannot receive any packet
> larger than 1500 bytes. Is this related to the driver? I can send packets
> larger than 1500 bytes, so I do not suspect anything wrong with my mbuf
> initialization.
>
> In my application I am using the following code:
>
>     #define MBUF_CACHE_SIZE      128
>     #define MBUF_DATA_SIZE       RTE_MBUF_DEFAULT_BUF_SIZE
>     #define JUMBO_FRAME_MAX_SIZE 0x2600 /* 9728 bytes */
>
>     .rxmode = {
>         .mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
>         .split_hdr_size = 0,
>         .header_split   = 0, /**< Header Split disabled */
>         .hw_ip_checksum = 0, /**< IP checksum offload disabled */
>         .hw_vlan_filter = 0, /**< VLAN filtering disabled */
>         .hw_vlan_strip  = 1, /**< VLAN strip enabled */
>         .jumbo_frame    = 1, /**< Jumbo frame support enabled */
>         .hw_strip_crc   = 1, /**< CRC stripped by hardware */
>         .enable_scatter = 1, /* required for jumbo frames beyond 1500 */
>         .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE, /* instead of ETHER_MAX_LEN */
>     },
>
>     create_mbuf_pool(valid_num_ports, rte_lcore_count() - 1, MBUF_DATA_SIZE,
>                      MAX_QUEUES, RTE_TEST_RX_DESC_DEFAULT, MBUF_CACHE_SIZE);
>
> I am also calling rte_eth_dev_set_mtu() to set the MTU to 9000, and have
> verified it with rte_eth_dev_get_mtu().
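A sketch of how that verification can be tightened against the receive
limit the PMD itself advertises; the helper name and return conventions
below are illustrative, written against the 17.05-era ethdev API:

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Check the device's advertised limit before programming the MTU.
     * Port ids are uint8_t in DPDK 17.05. */
    static int
    set_and_verify_mtu(uint8_t port_id, uint16_t mtu)
    {
        struct rte_eth_dev_info dev_info;
        uint16_t readback = 0;

        rte_eth_dev_info_get(port_id, &dev_info);

        /* An L3 MTU implies an L2 frame of mtu + Ethernet header + CRC. */
        if ((uint32_t)mtu + ETHER_HDR_LEN + ETHER_CRC_LEN > dev_info.max_rx_pktlen)
            return -1; /* the NIC cannot receive frames this large */

        if (rte_eth_dev_set_mtu(port_id, mtu) != 0)
            return -1; /* the PMD rejected the MTU */

        rte_eth_dev_get_mtu(port_id, &readback);
        return (readback == mtu) ? 0 : -1;
    }

If max_rx_pktlen comes back smaller than the jumbo target, the NIC itself
is the limiting hop, which is worth ruling out before looking at the
switches in between.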
> Below is my system info / logs from dpdk (17.05.1). Your help is really
> appreciated.
>
> Thanks, DM.
>
> $ uname -a
> Linux 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
>
> $ modinfo uio_pci_generic
> filename:       /lib/modules/3.10.0-514.10.2.el7.x86_64/kernel/drivers/uio/uio_pci_generic.ko
> description:    Generic UIO driver for PCI 2.3 devices
> author:         Michael S. Tsirkin
> license:        GPL v2
> version:        0.01.0
> rhelversion:    7.3
> srcversion:     10714380C2025655D980132
> depends:        uio
> intree:         Y
> vermagic:       3.10.0-514.10.2.el7.x86_64 SMP mod_unload modversions
> signer:         CentOS Linux kernel signing key
> sig_key:        27:F2:04:85:EB:EB:3B:2D:54:AD:D6:1E:57:B3:08:FA:E0:70:F4:1F
> sig_hashalgo:   sha256
>
> dpdk-17.05.1
>
> $ ./bind2dpdk_status.sh
> Checking Ethernet port binding with DPDK
>
> Network devices using DPDK-compatible driver
> ============================================
> 0000:01:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:01:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:01:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:04:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:04:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:04:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:04:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:05:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:05:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:05:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:05:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:82:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:82:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:82:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:82:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
>
> Network devices using kernel driver
> ===================================
> 0000:01:00.0 'I350 Gigabit Network Connection 1521' if=eth0 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
>
> Other Network devices
> =====================
>
> Crypto devices using DPDK-compatible driver
> ===========================================
>
> Crypto devices using kernel driver
> ==================================
>
> Other Crypto devices
> ====================
>
> Eventdev devices using DPDK-compatible driver
> =============================================
>
> Eventdev devices using kernel driver
> ====================================
>
> Other Eventdev devices
> ======================
>
> Mempool devices using DPDK-compatible driver
> ============================================
>
> Mempool devices using kernel driver
> ===================================
>
> Other Mempool devices
> =====================
> EAL: Detected 72 lcore(s)
> EAL: Auto-detected process type: PRIMARY
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: PCI device 0000:01:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> EAL: PCI device 0000:01:00.1 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:01:00.2 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:01:00.3 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:03:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> EAL: PCI device 0000:03:00.1 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:03:00.2 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:03:00.3 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:04:00.0 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:04:00.1 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:04:00.2 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:04:00.3 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:05:00.0 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:05:00.1 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:05:00.2 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:05:00.3 on NUMA socket 0
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:82:00.0 on NUMA socket 1
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:82:00.1 on NUMA socket 1
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:82:00.2 on NUMA socket 1
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:82:00.3 on NUMA socket 1
> EAL:   Device is blacklisted, not initializing
> nb_ports=1
> valid_num_ports=1
> MBUF_DATA_SIZE=2176 , MAX_QUEUES=8, RTE_TEST_RX_DESC_DEFAULT=1024 , MBUF_CACHE_SIZE=128
> Waiting for data...
> portid=0
> enabled_port_mask=1
> **** MTU is programmed successfully to 9000
> port_init port - 0
> Device supports maximum rx queues are 8
> MAX_QUEUES defined as 8
> max_no_tx_queue = 8 , max_no_rx_queue = 8
> pf queue num: 0, configured vmdq pool num: 8, each vmdq pool has 1 queues
> port=0,rx_rings=8,tx_rings=3
> rx-queue setup successfully q=0
> rx-queue setup successfully q=1
> rx-queue setup successfully q=2
> rx-queue setup successfully q=3
> rx-queue setup successfully q=4
> rx-queue setup successfully q=5
> rx-queue setup successfully q=6
> rx-queue setup successfully q=7
> tx-queue setup successfully q=0
> tx-queue setup successfully q=1
> tx-queue setup successfully q=2
> Port 0: Enabling HW FC
> VHOST_PORT: Max virtio devices supported: 8
> VHOST_PORT: Port 0 MAC: a0 36 9f cb ba 34
> Dump Flow Control 0
> HighWater Martk=33828
> LowWater Martk=32328
> PauseTime=1664
> Send XON Martk=1
> Mode=1
> MAC Control Frame forward=0
> Setting Flow Control = FULL
> Dump Flow Control 0
> HighWater Martk=33828
> LowWater Martk=32328
> PauseTime=1664
> Send XON Martk=1
> Mode=1
> MAC Control Frame forward=0
> **** MTU is programmed successfully to 9000
> VHOST_DATA: ********************* TX - Procesing on Core 40 started
> VHOST_DATA: ***************** RX Procesing on Core 41 started
> vmdq_conf_default.rxmode.mq_mode=4
> vmdq_conf_default.rxmode.max_rx_pkt_len=9728
> vmdq_conf_default.rxmode.split_hdr_size=0
> vmdq_conf_default.rxmode.header_split=0
> vmdq_conf_default.rxmode.hw_ip_checksum=0
> vmdq_conf_default.rxmode.hw_vlan_filter=0
> vmdq_conf_default.rxmode.hw_vlan_strip=1
> vmdq_conf_default.rxmode.hw_vlan_extend=0
> vmdq_conf_default.rxmode.jumbo_frame=1
> vmdq_conf_default.rxmode.hw_strip_crc=1
> vmdq_conf_default.rxmode.enable_scatter=1
> vmdq_conf_default.rxmode.enable_lro=0
> VHOST_CONFIG: vhost-user server: socket created, fd: 23
> VHOST_CONFIG: bind to /tmp/vubr0