* [dpdk-users] jumbo frame support..(more than 1500 bytes)
[not found] <800120698.826966.1504246225460.ref@mail.yahoo.com>
@ 2017-09-01 6:10 ` Dharmesh Mehta
2017-09-01 12:55 ` Kyle Larose
0 siblings, 1 reply; 4+ messages in thread
From: Dharmesh Mehta @ 2017-09-01 6:10 UTC (permalink / raw)
To: users
Sorry for resubmission,
I am still stuck: I cannot receive any packet larger than 1500 bytes. Is it related to the driver? I can send packets larger than 1500 bytes, so I do not suspect anything wrong with my mbuf initialization.
In my application I am using the following code:
#define MBUF_CACHE_SIZE      128
#define MBUF_DATA_SIZE       RTE_MBUF_DEFAULT_BUF_SIZE
#define JUMBO_FRAME_MAX_SIZE 0x2600 /* 9728 bytes */

.rxmode = {
    .mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
    .split_hdr_size = 0,
    .header_split   = 0,  /**< Header Split disabled */
    .hw_ip_checksum = 0,  /**< IP checksum offload disabled */
    .hw_vlan_filter = 0,  /**< VLAN filtering disabled */
    .hw_vlan_strip  = 1,  /**< VLAN strip enabled */
    .jumbo_frame    = 1,  /**< Jumbo frame support enabled */
    .hw_strip_crc   = 1,  /**< CRC stripped by hardware */
    .enable_scatter = 1,  /* required for jumbo frames > 1500 */
    .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE, /* instead of ETHER_MAX_LEN */
},
create_mbuf_pool(valid_num_ports, rte_lcore_count() - 1, MBUF_DATA_SIZE,
                 MAX_QUEUES, RTE_TEST_RX_DESC_DEFAULT, MBUF_CACHE_SIZE);
I am also calling rte_eth_dev_set_mtu() to set the MTU to 9000, and I verified it with rte_eth_dev_get_mtu().
Below is my system info and the logs from DPDK 17.05.1.
Your help is really appreciated.
Thanks,
DM.
uname -a
Linux 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

modinfo uio_pci_generic
filename:       /lib/modules/3.10.0-514.10.2.el7.x86_64/kernel/drivers/uio/uio_pci_generic.ko
description:    Generic UIO driver for PCI 2.3 devices
author:         Michael S. Tsirkin <mst@redhat.com>
license:        GPL v2
version:        0.01.0
rhelversion:    7.3
srcversion:     10714380C2025655D980132
depends:        uio
intree:         Y
vermagic:       3.10.0-514.10.2.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        27:F2:04:85:EB:EB:3B:2D:54:AD:D6:1E:57:B3:08:FA:E0:70:F4:1F
sig_hashalgo:   sha256
dpdk-17.05.1
$ ./bind2dpdk_status.sh
Checking Ethernet port binding with DPDK
Network devices using DPDK-compatible driver
============================================
0000:01:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:01:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:01:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:03:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:03:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:03:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:03:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:04:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:04:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:04:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:04:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:05:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:05:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:05:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:05:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:82:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:82:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:82:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:82:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci

Network devices using kernel driver
===================================
0000:01:00.0 'I350 Gigabit Network Connection 1521' if=eth0 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
Other Network devices
=====================
<none>

Crypto devices using DPDK-compatible driver
===========================================
<none>

Crypto devices using kernel driver
==================================
<none>

Other Crypto devices
====================
<none>

Eventdev devices using DPDK-compatible driver
=============================================
<none>

Eventdev devices using kernel driver
====================================
<none>

Other Eventdev devices
======================
<none>

Mempool devices using DPDK-compatible driver
============================================
<none>

Mempool devices using kernel driver
===================================
<none>

Other Mempool devices
=====================
<none>
EAL: Detected 72 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:01:00.2 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:01:00.3 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:03:00.1 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:03:00.2 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:03:00.3 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:04:00.2 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:04:00.3 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:05:00.1 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:05:00.2 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:05:00.3 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:82:00.0 on NUMA socket 1
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:82:00.1 on NUMA socket 1
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:82:00.2 on NUMA socket 1
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:82:00.3 on NUMA socket 1
EAL:   Device is blacklisted, not initializing
nb_ports=1
valid_num_ports=1
MBUF_DATA_SIZE=2176 , MAX_QUEUES=8, RTE_TEST_RX_DESC_DEFAULT=1024 , MBUF_CACHE_SIZE=128
Waiting for data...
portid=0
enabled_port_mask=1
**** MTU is programmed successfully to 9000
port_init port - 0
Device supports maximum rx queues are 8
MAX_QUEUES defined as 8
max_no_tx_queue = 8 , max_no_rx_queue = 8
pf queue num: 0, configured vmdq pool num: 8, each vmdq pool has 1 queues
port=0,rx_rings=8,tx_rings=3
rx-queue setup successfully q=0
rx-queue setup successfully q=1
rx-queue setup successfully q=2
rx-queue setup successfully q=3
rx-queue setup successfully q=4
rx-queue setup successfully q=5
rx-queue setup successfully q=6
rx-queue setup successfully q=7
tx-queue setup successfully q=0
tx-queue setup successfully q=1
tx-queue setup successfully q=2
Port 0: Enabling HW FC
VHOST_PORT: Max virtio devices supported: 8
VHOST_PORT: Port 0 MAC: a0 36 9f cb ba 34
Dump Flow Control 0
HighWater Martk=33828
LowWater Martk=32328
PauseTime=1664
Send XON Martk=1
Mode=1
MAC Control Frame forward=0
Setting Flow Control = FULL
Dump Flow Control 0
HighWater Martk=33828
LowWater Martk=32328
PauseTime=1664
Send XON Martk=1
Mode=1
MAC Control Frame forward=0
**** MTU is programmed successfully to 9000
VHOST_DATA: ********************* TX - Procesing on Core 40 started
********************* TX - Procesing on Core 40 started
VHOST_DATA: ***************** RX Procesing on Core 41 started
***************** RX Procesing on Core 41 started
vmdq_conf_default.rxmode.mq_mode=4
vmdq_conf_default.rxmode.max_rx_pkt_len=9728
vmdq_conf_default.rxmode.split_hdr_size=0
vmdq_conf_default.rxmode.header_split=0
vmdq_conf_default.rxmode.hw_ip_checksum=0
vmdq_conf_default.rxmode.hw_vlan_filter=0
vmdq_conf_default.rxmode.hw_vlan_strip=1
vmdq_conf_default.rxmode.hw_vlan_extend=0
vmdq_conf_default.rxmode.jumbo_frame=1
vmdq_conf_default.rxmode.hw_strip_crc=1
vmdq_conf_default.rxmode.enable_scatter=1
vmdq_conf_default.rxmode.enable_lro=0
VHOST_CONFIG: vhost-user server: socket created, fd: 23
VHOST_CONFIG: bind to /tmp/vubr0
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [dpdk-users] jumbo frame support..(more than 1500 bytes)
2017-09-01 6:10 ` [dpdk-users] jumbo frame support..(more than 1500 bytes) Dharmesh Mehta
@ 2017-09-01 12:55 ` Kyle Larose
2017-09-01 15:41 ` Chris Paquin
2017-09-01 15:52 ` Dharmesh Mehta
0 siblings, 2 replies; 4+ messages in thread
From: Kyle Larose @ 2017-09-01 12:55 UTC (permalink / raw)
To: Dharmesh Mehta, users
How is it failing? Is it dropping with a frame too long counter? Are you sure it's not dropping before your device? Have you made sure the max frame size of every hop in between is large enough?
-----Original Message-----
From: users [mailto:users-bounces@dpdk.org] On Behalf Of Dharmesh Mehta
Sent: Friday, September 01, 2017 2:10 AM
To: users@dpdk.org
Subject: [dpdk-users] jumbo frame support..(more than 1500 bytes)
[original message and logs quoted in full; snipped]
* Re: [dpdk-users] jumbo frame support..(more than 1500 bytes)
2017-09-01 12:55 ` Kyle Larose
@ 2017-09-01 15:41 ` Chris Paquin
2017-09-01 15:52 ` Dharmesh Mehta
1 sibling, 0 replies; 4+ messages in thread
From: Chris Paquin @ 2017-09-01 15:41 UTC (permalink / raw)
To: Kyle Larose; +Cc: Dharmesh Mehta, users
First off, I apologize, as I am very new to DPDK.
I have a very similar issue. However, I have found that the packet drops
(RX-nombuf) I see when setting the MTU above 1500 are related to the size
of the mbuf: the larger the mbuf, the larger the packet I can receive.
But I cannot set my mbuf size larger than 6016 (bytes, I assume?):
Sep 1 11:25:56 rhel7 testpmd[3016]: USER1: create a new mbuf pool
<mbuf_pool_socket_0>: n=171456, size=6016, socket=0
Anything larger, and I cannot launch testpmd:
Sep 1 11:25:30 rhel7 testpmd[3008]: USER1: create a new mbuf pool
<mbuf_pool_socket_0>: n=171456, size=6017, socket=0
Sep 1 11:25:30 rhel7 testpmd[3008]: RING: Cannot reserve memory for tailq
Sep 1 11:25:30 rhel7 testpmd[3008]: EAL: Error - exiting with code: 1
Cause:
Sep 1 11:25:30 rhel7 testpmd[3008]: Creation of mbuf pool for socket 0
failed: Cannot allocate memory
So, in short: I can receive larger packets without dropping them by
increasing my mbuf size, and you might try the same. However, I cannot get
close to the desired MTU of 9000. I would love to know if you get it
working.
CHRISTOPHER PAQUIN
SENIOR CLOUD CONSULTANT, RHCE, RHCSA-OSP
Red Hat <https://www.redhat.com/>
M: 770-906-7646
On Fri, Sep 1, 2017 at 8:55 AM, Kyle Larose <klarose@sandvine.com> wrote:
> How is it failing? Is it dropping with a frame too long counter? Are you
> sure it's not dropping before your device? Have you made sure the max frame
> size of every hop in between is large enough?
> [quoted original message and logs snipped]
* Re: [dpdk-users] jumbo frame support..(more than 1500 bytes)
2017-09-01 12:55 ` Kyle Larose
2017-09-01 15:41 ` Chris Paquin
@ 2017-09-01 15:52 ` Dharmesh Mehta
1 sibling, 0 replies; 4+ messages in thread
From: Dharmesh Mehta @ 2017-09-01 15:52 UTC (permalink / raw)
To: users, Kyle Larose
Hello Kyle,
Could you please tell me which DEBUG flag I should turn on so that it prints the error?
How do I check that it is not dropping before my device?
> Have you made sure the max frame size of every hop in between is large enough?
- Can you tell me which file / debug flag to turn on so I get more insight?
If you look at my e-mail, the buffer sizes are set correctly in my main application.
I hope the same #defines get picked up by the libraries / drivers during compilation.
Do we have a centralized place in DPDK where we can configure all of these settings, just like the Linux kernel config file?
Thanks,
-DM
On Friday, September 1, 2017, 5:55:51 AM PDT, Kyle Larose <klarose@sandvine.com> wrote:
How is it failing? Is it dropping with a frame too long counter? Are you sure it's not dropping before your device? Have you made sure the max frame size of every hop in between is large enough?
-----Original Message-----
From: users [mailto:users-bounces@dpdk.org] On Behalf Of Dharmesh Mehta
Sent: Friday, September 01, 2017 2:10 AM
To: users@dpdk.org
Subject: [dpdk-users] jumbo frame support..(more than 1500 bytes)
Sorry for resubmission,
Still I am stuck at receiving any packet more than 1500+bytes. is it related to driver?I can send packet larger than 1500bytes so I am not suspecting anything wrong with my mbuf initialization.
In my application I am using following code...
#define MBUF_CACHE_SIZE 128#define MBUF_DATA_SIZE RTE_MBUF_DEFAULT_BUF_SIZE#define JUMBO_FRAME_MAX_SIZE 0x2600 //(9728 bytes) .rxmode = { .mq_mode = ETH_MQ_RX_VMDQ_ONLY, .split_hdr_size = 0, .header_split = 0, /**< Header Split disabled */ .hw_ip_checksum = 0, /**< IP checksum offload disabled */ .hw_vlan_filter = 0, /**< VLAN filtering disabled */ .hw_vlan_strip = 1, /**< VLAN strip enabled. */ .jumbo_frame = 1, /**< Jumbo Frame Support enabled */ .hw_strip_crc = 1, /**< CRC stripped by hardware */ .enable_scatter = 1, //required for jumbofram + 1500. .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,//ETHER_MAX_LEN }, create_mbuf_pool(valid_num_ports, rte_lcore_count() - 1, MBUF_DATA_SIZE, MAX_QUEUES, RTE_TEST_RX_DESC_DEFAULT, MBUF_CACHE_SIZE);
I am also calling rte_eth_dev_set_mtu, to set MTU 9000, and verified with rte_eth_dev_get_mtu.
Below is my system info / logs from dpdk (17.05.1).
Yours help is really appreciated.
Thanks.DM.
uname -aLinux 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux modinfo uio_pci_genericfilename: /lib/modules/3.10.0-514.10.2.el7.x86_64/kernel/drivers/uio/uio_pci_generic.kodescription: Generic UIO driver for PCI 2.3 devicesauthor: Michael S. Tsirkin <mst@redhat.com>license: GPL v2version: 0.01.0rhelversion: 7.3srcversion: 10714380C2025655D980132depends: uiointree: Yvermagic: 3.10.0-514.10.2.el7.x86_64 SMP mod_unload modversions signer: CentOS Linux kernel signing keysig_key: 27:F2:04:85:EB:EB:3B:2D:54:AD:D6:1E:57:B3:08:FA:E0:70:F4:1Fsig_hashalgo: sha256
dpdk-17.05.1
$ ./bind2dpdk_status.sh
Checking Ethernet port binding with DPDK

Network devices using DPDK-compatible driver
============================================
0000:01:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:01:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:01:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:03:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:03:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:03:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:03:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:04:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:04:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:04:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:04:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:05:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:05:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:05:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:05:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:82:00.0 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:82:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:82:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:82:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci

Network devices using kernel driver
===================================
0000:01:00.0 'I350 Gigabit Network Connection 1521' if=eth0 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
Other Network devices
=====================
<none>

Crypto devices using DPDK-compatible driver
===========================================
<none>

Crypto devices using kernel driver
==================================
<none>

Other Crypto devices
====================
<none>

Eventdev devices using DPDK-compatible driver
=============================================
<none>

Eventdev devices using kernel driver
====================================
<none>

Other Eventdev devices
======================
<none>

Mempool devices using DPDK-compatible driver
============================================
<none>

Mempool devices using kernel driver
===================================
<none>

Other Mempool devices
=====================
<none>
EAL: Detected 72 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:01:00.2 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:01:00.3 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:03:00.1 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:03:00.2 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:03:00.3 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:04:00.2 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:04:00.3 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:05:00.1 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:05:00.2 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:05:00.3 on NUMA socket 0
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:82:00.0 on NUMA socket 1
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:82:00.1 on NUMA socket 1
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:82:00.2 on NUMA socket 1
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:82:00.3 on NUMA socket 1
EAL:   Device is blacklisted, not initializing
nb_ports=1
valid_num_ports=1
MBUF_DATA_SIZE=2176 , MAX_QUEUES=8, RTE_TEST_RX_DESC_DEFAULT=1024 , MBUF_CACHE_SIZE=128
Waiting for data...
portid=0
enabled_port_mask=1
**** MTU is programmed successfully to 9000
port_init port - 0
Device supports maximum rx queues are 8
MAX_QUEUES defined as 8
max_no_tx_queue = 8 , max_no_rx_queue = 8
pf queue num: 0, configured vmdq pool num: 8, each vmdq pool has 1 queues
port=0,rx_rings=8,tx_rings=3
rx-queue setup successfully q=0
rx-queue setup successfully q=1
rx-queue setup successfully q=2
rx-queue setup successfully q=3
rx-queue setup successfully q=4
rx-queue setup successfully q=5
rx-queue setup successfully q=6
rx-queue setup successfully q=7
tx-queue setup successfully q=0
tx-queue setup successfully q=1
tx-queue setup successfully q=2
Port 0: Enabling HW FC
VHOST_PORT: Max virtio devices supported: 8
VHOST_PORT: Port 0 MAC: a0 36 9f cb ba 34
Dump Flow Control 0
HighWater Martk=33828
LowWater Martk=32328
PauseTime=1664
Send XON Martk=1
Mode=1
MAC Control Frame forward=0
Setting Flow Control = FULL
Dump Flow Control 0
HighWater Martk=33828
LowWater Martk=32328
PauseTime=1664
Send XON Martk=1
Mode=1
MAC Control Frame forward=0
**** MTU is programmed successfully to 9000
VHOST_DATA: ********************* TX - Procesing on Core 40 started
VHOST_DATA: ***************** RX Procesing on Core 41 started
vmdq_conf_default.rxmode.mq_mode=4
vmdq_conf_default.rxmode.max_rx_pkt_len=9728
vmdq_conf_default.rxmode.split_hdr_size=0
vmdq_conf_default.rxmode.header_split=0
vmdq_conf_default.rxmode.hw_ip_checksum=0
vmdq_conf_default.rxmode.hw_vlan_filter=0
vmdq_conf_default.rxmode.hw_vlan_strip=1
vmdq_conf_default.rxmode.hw_vlan_extend=0
vmdq_conf_default.rxmode.jumbo_frame=1
vmdq_conf_default.rxmode.hw_strip_crc=1
vmdq_conf_default.rxmode.enable_scatter=1
vmdq_conf_default.rxmode.enable_lro=0
VHOST_CONFIG: vhost-user server: socket created, fd: 23
VHOST_CONFIG: bind to /tmp/vubr0
From cpaquin@redhat.com Fri Sep 1 17:59:30 2017
In-Reply-To: <1577249268.1093678.1504281150569@mail.yahoo.com>
References: <800120698.826966.1504246225460.ref@mail.yahoo.com>
<800120698.826966.1504246225460@mail.yahoo.com>
<D76BBBCF97F57144BB5FCF08007244A7A9045B9E@wtl-exchp-2.sandvine.com>
<1577249268.1093678.1504281150569@mail.yahoo.com>
From: Chris Paquin <cpaquin@redhat.com>
Date: Fri, 1 Sep 2017 11:59:08 -0400
Message-ID: <CAAMYPe4i4s-6nAF3dBQcbA-r6yu_mAu0imexc8hJpHcJFf8ZEA@mail.gmail.com>
To: Dharmesh Mehta <mehtadharmesh@yahoo.com>
Cc: "users@dpdk.org" <users@dpdk.org>, Kyle Larose <klarose@sandvine.com>
Subject: Re: [dpdk-users] jumbo frame support..(more than 1500 bytes)
Dharmesh, I use the command below to show dropped packets. Not sure if
this helps:

testpmd> show port stats 0
  ######################## NIC statistics for port 0  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors:  0
  RX-nombuf:  2366029984
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0
  Tx-pps:            0
  ############################################################################
CHRISTOPHER PAQUIN
SENIOR CLOUD CONSULTANT, RHCE, RHCSA-OSP
Red Hat <https://www.redhat.com/>
M: 770-906-7646
On Fri, Sep 1, 2017 at 11:52 AM, Dharmesh Mehta <mehtadharmesh@yahoo.com>
wrote:
> Hello Kyle,
>
> Could you please tell me which DEBUG flag I should turn on so it can
> print the error? How do I check that it is not dropping before my
> device?
>
> > Have you made sure the max frame size of every hop in between is
> > large enough?
>
> Can you tell me which file / debug flag I should turn on so I get
> more insight?
>
> If you look at my e-mail, the buffer size is set correctly in my main
> application, and I expect the same #defines to be picked up by the
> library / driver etc. during compilation. Do we have a centralized
> place in DPDK where we can configure all these settings, just like the
> Linux kernel config file?
>
> Thanks,
> Dharmesh (DM)
> On Friday, September 1, 2017, 5:55:51 AM PDT, Kyle Larose
> <klarose@sandvine.com> wrote:
>
> How is it failing? Is it dropping with a frame too long counter? Are you
> sure it's not dropping before your device? Have you made sure the max frame
> size of every hop in between is large enough?
>
> [quoted original message trimmed; see the first message in this thread]
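(One way to test Kyle's point about per-hop frame sizes is a don't-fragment
ICMP probe from the sending host. A sketch; 192.0.2.1 is a placeholder peer
address, and the payload arithmetic assumes IPv4 with no IP options:)

```shell
# ICMP payload that makes the probe exactly MTU bytes on the wire:
# MTU minus 20 bytes of IPv4 header and 8 bytes of ICMP header.
mtu=9000
payload=$((mtu - 20 - 8))
echo "$payload"
# On the sender (placeholder peer address):
#   ping -M do -c 3 -s "$payload" 192.0.2.1
# "Frag needed" / "Message too long" errors mean some hop in between
# has an MTU smaller than 9000.
```

If the probe fails, shrink the payload until it passes to find the
bottleneck hop's MTU.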
^ permalink raw reply [flat|nested] 4+ messages in thread
end of thread, other threads:[~2017-09-01 15:52 UTC | newest]
Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <800120698.826966.1504246225460.ref@mail.yahoo.com>
2017-09-01 6:10 ` [dpdk-users] jumbo frame support..(more than 1500 bytes) Dharmesh Mehta
2017-09-01 12:55 ` Kyle Larose
2017-09-01 15:41 ` Chris Paquin
2017-09-01 15:52 ` Dharmesh Mehta