DPDK usage discussions
* [dpdk-users] Fails to receive data more than 1500 bytes.
       [not found] <1098935201.614169.1503613167623.ref@mail.yahoo.com>
@ 2017-08-24 22:19 ` Dharmesh Mehta
  2017-08-24 23:18   ` Stephen Hemminger
  0 siblings, 1 reply; 3+ messages in thread
From: Dharmesh Mehta @ 2017-08-24 22:19 UTC (permalink / raw)
  To: Users

Hello,
I am using an Intel I350 NIC, and I am not able to receive packets larger than 1500 bytes. I tried the igb_uio as well as the uio_pci_generic driver, but both fail.
If I keep the data <= 1500 bytes it works, but anything larger than 1500 bytes is not received.
Do I have to tune any config parameter in order to support more than 1500 bytes?
I tried to set the MTU from code using the API rte_eth_dev_set_mtu(port_id, mtu), but with no success.
0000:03:00.0 'I350 Gigabit Network Connection 1521' drv=igb_uio unused=vfio-pci,uio_pci_generic
0000:03:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:03:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
0000:03:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
Thanks in advance.
DM.
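
For reference, a minimal sketch (not from the original post) of how that
rte_eth_dev_set_mtu() call is typically made and checked against the DPDK
17.x API; the 9000-byte MTU here is only an illustrative value:

#include <stdio.h>
#include <rte_ethdev.h>

static void
try_set_jumbo_mtu(uint8_t port_id)
{
	uint16_t mtu = 9000;	/* illustrative jumbo MTU, not from the post */
	int ret = rte_eth_dev_set_mtu(port_id, mtu);

	if (ret != 0)
		printf("rte_eth_dev_set_mtu(%u, %u) failed: %d\n",
		       port_id, mtu, ret);
}

Note that on many PMDs this call alone is not enough for jumbo reception;
the rxmode settings discussed later in this thread also have to be in place.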
From cpaquin@redhat.com  Fri Aug 25 00:20:11 2017
From: Chris Paquin <cpaquin@redhat.com>
Date: Thu, 24 Aug 2017 18:19:49 -0400
Message-ID: <CAAMYPe6BSkzePp3NHNXQArk8LFHfqe5HZTQ30VhY53Rbjjg6ug@mail.gmail.com>
To: "Rosen, Rami" <rami.rosen@intel.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] [SRIOV][TestPMD][OpenStack] No probed ethernet devices

Rami, I misspoke. I was actually told not to use igb_uio because it was old
and had been replaced by virtio_pci. I now see that this is not the case.
After compiling 17.08, I was able to successfully bind using igb_uio.

Regarding the kernel params you mention: I did set them, but I am used to
those configurations only being used for SR-IOV passthrough on compute
nodes. Is it also correct to set them in the guest, or are you actually
talking about the compute node?

FYI - I do have testpmd up and running, and am now reading through the docs
and figuring out how to configure it.

Thanks for the reply.



CHRISTOPHER PAQUIN
SENIOR CLOUD CONSULTANT, RHCE, RHCSA-OSP
Red Hat <https://www.redhat.com/>
M: 770-906-7646

On Thu, Aug 24, 2017 at 5:56 PM, Rosen, Rami <rami.rosen@intel.com> wrote:

> Chris,
>
> >(not igb_uio - which I read was deprecated).
> Interesting, can you please give a link to where you read it?
>
> Do you have "iommu=pt intel_iommu=on" in the kernel command line? Does
> "cat /proc/cmdline" show it?
>
> What does "dmesg | grep -e DMAR -e IOMMU" show?
>
> Regards,
> Rami Rosen
>
>
> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Chris Paquin
> Sent: Thursday, August 24, 2017 22:21
> To: users@dpdk.org
> Subject: [dpdk-users] [SRIOV][TestPMD][OpenStack] No probed ethernet
> devices
>
> Hello. I am trying to get testpmd up and running in a RHEL 7.4 guest (DPDK
> 17.08), however, I am unable to get my interface to bind to a dpdk
> compatible driver.
>
> [root@localhost ~]# dpdk-devbind --status
>
> Network devices using DPDK-compatible driver
> ============================================
> <none>
>
> Network devices using kernel driver
> ===================================
> 0000:00:03.0 'Virtio network device' if=eth0 drv=virtio-pci
> unused=virtio_pci,vfio-pci *Active*
> 0000:00:05.0 '82599 Ethernet Controller Virtual Function' if=eth1
> drv=ixgbevf unused=vfio-pci *Active*
>
> I am trying to bind the vfio-pci driver (not igb_uio - which I read was
> deprecated).
>
> I am running into the following error.
>
> [root@testpmd-vm ~]# dpdk_nic_bind --bind=vfio-pci 0000:00:05.0
> Error: bind failed for 0000:00:05.0 - Cannot bind to driver vfio-pci
>
> Has anyone seen this before? Can someone confirm that I am attempting to
> bind correctly?
>

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [dpdk-users] Fails to receive data more than 1500 bytes.
  2017-08-24 22:19 ` [dpdk-users] Fails to receive data more than 1500 bytes Dharmesh Mehta
@ 2017-08-24 23:18   ` Stephen Hemminger
  2017-08-25  0:01     ` Dharmesh Mehta
  0 siblings, 1 reply; 3+ messages in thread
From: Stephen Hemminger @ 2017-08-24 23:18 UTC (permalink / raw)
  To: Dharmesh Mehta; +Cc: Users

On Thu, 24 Aug 2017 22:19:27 +0000 (UTC)
Dharmesh Mehta <mehtadharmesh@yahoo.com> wrote:

> Hello,
> I am using Intel i350 NIC card, and I am not able to receive data more than 1500 bytes in a packet. I tried igb_uio as well as uio_pci_generic driver, but both fails.
> If I reduce data <= 1500 bytes than it works, but any thing more than 1500 is not able to receive.
> Do I have to tune any config parameter in order to support more than 1500?
> I tried to set MTU from code using API - rte_eth_dev_set_mtu(port_id, mtu) , but no success.
> 0000:03:00.0 'I350 Gigabit Network Connection 1521' drv=igb_uio unused=vfio-pci,uio_pci_generic
> 0000:03:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> Thanks in advance.
> DM.

In order to support >1500 bytes, you need to at least:
	1. set jumbo_frame when setting rxmode
	2. set enable_scatter in rxmode (unless mtu + overhead < pool size)
	3. make sure pool mbuf size > eth->min_rx_buf_size
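
A minimal sketch of what those three points can look like against the DPDK
17.x rte_eth_conf / mbuf API (the 9728-byte frame size, pool sizing and
names here are illustrative assumptions, not taken from this thread):

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define JUMBO_FRAME_LEN 9728	/* assumed maximum frame size */

static struct rte_eth_conf port_conf = {
	.rxmode = {
		.max_rx_pkt_len = JUMBO_FRAME_LEN,
		.jumbo_frame    = 1,	/* 1. jumbo frames enabled in rxmode */
		.enable_scatter = 1,	/* 2. let large frames span several mbufs */
	},
};

static int
configure_jumbo_port(uint8_t port_id)
{
	/* 3. with scatter RX enabled, the data room per mbuf only has to
	 * exceed the device's minimum RX buffer size, not the full frame. */
	struct rte_mempool *pool = rte_pktmbuf_pool_create("rx_pool",
		8192, 256, 0, RTE_PKTMBUF_HEADROOM + 2048, rte_socket_id());

	if (pool == NULL)
		return -1;
	if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) < 0)
		return -1;
	return rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
		NULL, pool);
}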

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [dpdk-users] Fails to receive data more than 1500 bytes.
  2017-08-24 23:18   ` Stephen Hemminger
@ 2017-08-25  0:01     ` Dharmesh Mehta
  0 siblings, 0 replies; 3+ messages in thread
From: Dharmesh Mehta @ 2017-08-25  0:01 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Users

/* Empty vmdq configuration structure. Filled in programmatically. */
static struct rte_eth_conf vmdq_conf_default = {
	.rxmode = {
		.mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
		.split_hdr_size = 0,
		.header_split   = 0, /**< Header Split disabled */
		.hw_ip_checksum = 0, /**< IP checksum offload disabled */
		.hw_vlan_filter = 0, /**< VLAN filtering disabled */
		/*
		 * It is necessary for a 1G NIC such as the I350; this fixes a
		 * bug where ipv4 forwarding in the guest can't forward packets
		 * from one virtio dev to another virtio dev.
		 */
		.hw_vlan_strip  = 1, /**< VLAN strip enabled. */
		.jumbo_frame    = 0, /**< Jumbo Frame Support disabled */
		.hw_strip_crc   = 1, /**< CRC stripped by hardware */
		.enable_scatter = 1, /* required for jumbo frames > 1500 */
		.jumbo_frame    = 1, /* required for jumbo frames > 1500;
				      * duplicate initializer, this later 1
				      * overrides the 0 above */
	},
	.txmode = {
		.mq_mode = ETH_MQ_TX_NONE,
	},
	.rx_adv_conf = {
		/*
		 * should be overridden separately in code with
		 * appropriate values
		 */
		.vmdq_rx_conf = {
			.nb_queue_pools = ETH_8_POOLS,
			.enable_default_pool = 0,
			.default_pool = 0,
			.nb_pool_maps = 0,
			.pool_map = {{0, 0},},
		},
	},
};
This is my config. Am I missing something?

From: Stephen Hemminger <stephen@networkplumber.org>
To: Dharmesh Mehta <mehtadharmesh@yahoo.com>
Cc: Users <users@dpdk.org>
Sent: Thursday, August 24, 2017 4:18 PM
Subject: Re: [dpdk-users] Fails to receive data more than 1500 bytes.

On Thu, 24 Aug 2017 22:19:27 +0000 (UTC)
Dharmesh Mehta <mehtadharmesh@yahoo.com> wrote:

> Hello,
> I am using Intel i350 NIC card, and I am not able to receive data more than 1500 bytes in a packet. I tried igb_uio as well as uio_pci_generic driver, but both fails.
> If I reduce data <= 1500 bytes than it works, but any thing more than 1500 is not able to receive.
> Do I have to tune any config parameter in order to support more than 1500?
> I tried to set MTU from code using API - rte_eth_dev_set_mtu(port_id, mtu) , but no success.
> 0000:03:00.0 'I350 Gigabit Network Connection 1521' drv=igb_uio unused=vfio-pci,uio_pci_generic
> 0000:03:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> Thanks in advance.
> DM.

In order to support >1500 bytes, you need to at least:
    1. set jumbo_frame when setting rxmode
    2. set enable_scatter in rxmode (unless mtu + overhead < pool size)
    3. make sure pool mbuf size > eth->min_rx_buf_size


   
From mehtadharmesh@yahoo.com  Fri Aug 25 02:49:39 2017
Date: Fri, 25 Aug 2017 00:49:27 +0000 (UTC)
From: Dharmesh Mehta <mehtadharmesh@yahoo.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: Users <users@dpdk.org>
Message-ID: <1411108186.679125.1503622167743@mail.yahoo.com>
Subject: Re: [dpdk-users] Fails to receive data more than 1500 bytes.

Here is a dump of my rx_mode (I am using DPDK 17.05.1):
vmdq_conf_default.rxmode.mq_mode=4
vmdq_conf_default.rxmode.max_rx_pkt_len=9728
vmdq_conf_default.rxmode.split_hdr_size=0
vmdq_conf_default.rxmode.header_split=0
vmdq_conf_default.rxmode.hw_ip_checksum=0
vmdq_conf_default.rxmode.hw_vlan_filter=0
vmdq_conf_default.rxmode.hw_vlan_strip=1
vmdq_conf_default.rxmode.hw_vlan_extend=0
vmdq_conf_default.rxmode.jumbo_frame=1
vmdq_conf_default.rxmode.hw_strip_crc=1
vmdq_conf_default.rxmode.enable_scatter=1
vmdq_conf_default.rxmode.enable_lro=0
But I still don't see my code able to capture packets. TX is fine.
What other area of the code should I check?
-DM.
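
A minimal diagnostic sketch (an illustration against the 17.x ethdev/mbuf
API, not from this thread; mbuf_pool stands for whatever pool the RX queues
were set up with) that prints the sizes Stephen's third point depends on:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
check_rx_sizing(uint8_t port_id, struct rte_mempool *mbuf_pool)
{
	struct rte_eth_dev_info dev_info;
	uint16_t mtu = 0;
	uint16_t room = rte_pktmbuf_data_room_size(mbuf_pool);

	rte_eth_dev_info_get(port_id, &dev_info);
	rte_eth_dev_get_mtu(port_id, &mtu);

	printf("port %u: mtu=%u min_rx_bufsize=%u data_room=%u (headroom %u)\n",
	       port_id, mtu, dev_info.min_rx_bufsize, room,
	       RTE_PKTMBUF_HEADROOM);

	/* the per-segment RX buffer the PMD sees is data room minus headroom */
	if (room - RTE_PKTMBUF_HEADROOM < dev_info.min_rx_bufsize)
		printf("mbuf pool data room looks too small for this device\n");
}

The port's rte_eth_stats counters (imissed, ierrors, rx_nombuf) after sending
a jumbo frame are also worth a look, since they show whether frames are being
dropped for lack of RX buffers or for errors rather than silently ignored.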


From: Stephen Hemminger <stephen@networkplumber.org>
To: Dharmesh Mehta <mehtadharmesh@yahoo.com>
Cc: Users <users@dpdk.org>
Sent: Thursday, August 24, 2017 4:18 PM
Subject: Re: [dpdk-users] Fails to receive data more than 1500 bytes.

On Thu, 24 Aug 2017 22:19:27 +0000 (UTC)
Dharmesh Mehta <mehtadharmesh@yahoo.com> wrote:

> Hello,
> I am using Intel i350 NIC card, and I am not able to receive data more than 1500 bytes in a packet. I tried igb_uio as well as uio_pci_generic driver, but both fails.
> If I reduce data <= 1500 bytes than it works, but any thing more than 1500 is not able to receive.
> Do I have to tune any config parameter in order to support more than 1500?
> I tried to set MTU from code using API - rte_eth_dev_set_mtu(port_id, mtu) , but no success.
> 0000:03:00.0 'I350 Gigabit Network Connection 1521' drv=igb_uio unused=vfio-pci,uio_pci_generic
> 0000:03:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> Thanks in advance.
> DM.

In order to support >1500 bytes, you need to at least:
    1. set jumbo_frame when setting rxmode
    2. set enable_scatter in rxmode (unless mtu + overhead < pool size)
    3. make sure pool mbuf size > eth->min_rx_buf_size


   
From wbahacer@126.com  Fri Aug 25 10:41:00 2017
To: users@dpdk.org
From: Furong <WBAHACER@126.com>
Message-ID: <c9a0571a-1bac-8bea-9bf4-10a856d7ef41@126.com>
Date: Fri, 25 Aug 2017 16:40:51 +0800
Subject: [dpdk-users] How to tune configurations for measuring zero packet-loss performance of OVS-DPDK with vhost-user?

Hi! I've built a testbed to measure the zero packet-loss performance of
OVS-DPDK with vhost-user.

Here are the configurations of my testbed:
1. Host machine (ubuntu14.04.5, Linux-3.19.0-25):

     a/  hardware: quad socket with Intel Xeon E5-4603v2@2.20GHz (4
cores/socket), 32GB DDR3 memory, dual-port Intel 82599ES NIC
(10Gbps/port, in socket0);

     b/  BIOS settings: disable power management options (including
C-state, P-state and Step Speedup) and set the CPU to performance mode;

     c/  host OS booting parameters: isolcpus=0-7, nohz_full=0-7,
rcu_nocbs=0-7, intel_iommu=on, iommu=pt and 16 x 1G hugepages

     d/  OVS-DPDK:

          1)  version: OVS-2.6.1, DPDK-16.07.2 (using
x86_64-ivshmem-linuxapp-gcc target)

          2)  configurations: 2 physical ports (dpdk0 and dpdk1, vfio-pci
driver) and 2 vhost-user ports (vhost0, vhost1) were added to the ovs bridge
(br0), and 1 PMD core (pinned to core 0, in socket0) was used for
forwarding. The forwarding rules were
"in_port=dpdk0,action=output:vhost0" and
"in_port=vhost1,action=output:dpdk0".

      e/ irq affinity: kill irqbalance and set smp_affinity of all irqs
to 0xff00 (core 8-15).

      f/  RT priority: change RT priority of ksoftirqd (chrt -fp 2
$tid), rcuos (chrt -fp 3 $tid) and rcuob (chrt -fp 2 $tid).

2. VM setting

      a/ hypervisor: QEMU-2.8.0 and KVM

      b/ QEMU command:

qemu-system-x86_64 -enable-kvm -drive file=$IMAGE,if=virtio -cpu host \
              -smp 3 -m 4G -boot c \
              -name $NAME -vnc :$VNC_INDEX -net none \
              -object memory-backend-file,id=mem,size=4G,mem-path=/dev/hugepages,share=on \
              -mem-prealloc -numa node,memdev=mem \
              -chardev socket,id=char1,path=$VHOSTDIR/vhost0 \
              -netdev type=vhost-user,id=net1,chardev=char1,vhostforce \
              -device virtio-net-pci,netdev=net1,mac=52:54:00:00:00:14,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mrg_rxbuf=off,rx_queue_size=1024,indirect_desc=on \
              -chardev socket,id=char2,path=$VHOSTDIR/vhost1 \
              -netdev type=vhost-user,id=net2,chardev=char2,vhostforce \
              -device virtio-net-pci,netdev=net2,mac=52:54:00:00:00:15,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mrg_rxbuf=off,rx_queue_size=1024,indirect_desc=on

     c/ Guest OS: ubuntu14.04

     d/ Guest OS booting parameters: isolcpus=0-1, nohz_full=0-1,
rcu_nocbs=0-1, and 1 x 1G hugepages

     e/ irq affinity and RT priority: remove irqs and change RT priority
of isolated vcpus (vcpu0, vcpu1)

     f/ Guest forwarding application: example/l2fwd built on
dpdk-16.07.2 (using the ivshmem target). The function of l2fwd is to forward
packets from one port to the other port, and each port has its own
polling thread to receive packets.

     g/ App configurations: two virtio ports (vhost0, vhost1, using
uio_pci_generic driver) were used by l2fwd, and l2fwd had 2 polling
threads that ran on vcpu0 and vcpu1 (pinned to physical core1 and core2,
in socket0).

3. Traffic generator

      a/ Spirent TestCenter with 2 x 10G ports was used to generate traffic.

      b/ 1 flow with 64B packet size was generated from one port and
sent to dpdk0, and packets were then received and counted at the other port.

Here are my results:
1. Max throughput (non zero packet-loss case): 2.03Gbps

2. Max throughput (zero packet-loss case): 100 ~ 200Mbps

And I got some information about packet loss from packet statistics in
OVS and l2fwd:

When the input traffic is larger than 200Mbps, there appear to be 3
packet-loss points: OVS rx from the physical NIC (RX queue full), OVS tx
to the vhost port (vhost rx queue full) and l2fwd tx to the vhost port
(vhost tx queue full).

I don't know why the difference between the above 2 cases is so large.

I suspect that I have misconfigured my testbed. Could someone share
their experience with me?

Thanks a lot!

^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2017-08-25  0:01 UTC | newest]

Thread overview: 3+ messages
-- links below jump to the message on this page --
     [not found] <1098935201.614169.1503613167623.ref@mail.yahoo.com>
2017-08-24 22:19 ` [dpdk-users] Fails to receive data more than 1500 bytes Dharmesh Mehta
2017-08-24 23:18   ` Stephen Hemminger
2017-08-25  0:01     ` Dharmesh Mehta

This is a public inbox; see the mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as the URLs for the NNTP newsgroup(s).