DPDK patches and discussions
From: Didier Pallard <didier.pallard@6wind.com>
To: Yong Wang <yongwang@vmware.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH 0/8] net/vmxnet3: fix offload issues
Date: Fri, 13 Apr 2018 16:33:38 +0200	[thread overview]
Message-ID: <a5d261c5-69a5-0f86-965c-1d92c1e854a9@6wind.com> (raw)
In-Reply-To: <E4F52E19-FC73-4E4D-B546-4BC4FE7A3E56@vmware.com>

Hi Wang,

We didn't run unit tests with the testpmd tool; the validation tests
were done using our DPDK application in the following topology:


+------------------------------+
| +-----------+  +-----------+ |
| | Linux VM1 |  | Linux VM2 | |
| +------+----+  +----+------+ |
|       VMware DvSwitch        |
|     +--+------------+--+     |
|     |  +---OVSbr0---+  |     |
|     |                  |     |
|     |  6WIND DPDK app  |     |
|     +------------------+     |
|      VMware ESXi 6.0/6.5     |
+------------------------------+



All available offloads are enabled in Linux VM1 and VM2.
An iperf TCP stream is started from Linux VM1 to Linux VM2.


With ESXi 6.0 (vHW 11), we got the following numbers using 2 cores for 
our DPDK app:
- with LRO enabled on the DPDK app ports: 21 Gbps
- with LRO disabled on the DPDK app ports: 9 Gbps


With ESXi 6.5 (vHW 13), we got the following numbers using 2 cores for 
our DPDK app:
- with LRO enabled on the DPDK app ports: 40 Gbps
- with LRO disabled on the DPDK app ports: 20 Gbps
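
For reference, LRO on the app ports is toggled through the standard
ethdev Rx offload flags. Below is a minimal sketch of such a
configuration, assuming the 18.02-era offload API and a valid port_id;
it is an illustration only, not our exact application code:

  #include <string.h>
  #include <rte_ethdev.h>

  /* Illustrative only: configure one port with LRO enabled,
   * one Rx and one Tx queue. Queue setup and error handling
   * are omitted. */
  static int
  enable_lro(uint16_t port_id)
  {
          struct rte_eth_conf conf;

          memset(&conf, 0, sizeof(conf));
          conf.rxmode.offloads = DEV_RX_OFFLOAD_TCP_LRO;
          conf.rxmode.enable_lro = 1; /* legacy bit-field, kept in sync */

          return rte_eth_dev_configure(port_id, 1, 1, &conf);
  }

For the "LRO disabled" measurements, DEV_RX_OFFLOAD_TCP_LRO is simply
left out of rxmode.offloads.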


Didier

On 04/13/2018 06:44 AM, Yong Wang wrote:
> On 3/28/18, 8:44 AM, "dev on behalf of Didier Pallard" <dev-bounces@dpdk.org on behalf of didier.pallard@6wind.com> wrote:
>
>      This patchset fixes several issues found in vmxnet3 driver
>      when enabling LRO offload support:
>      - Rx offload information are not correctly gathered in
>        multisegmented packets, leading to inconsistent
>        packet type and Rx offload bits in resulting mbuf
>      - MSS recovery from offload information is not done
>        thus LRO mbufs do not contain a correct tso_segsz value.
>      - MSS value is not propagated by the host on some
>        hypervisor versions (6.0 for example)
>      - If two small TCP segments are aggregated in a single
>        mbuf, an empty segment that only contains offload
>        information is appended to this segment, and is
>        propagated as is to the application. But if the application
>        sends back to the hypervisor a mbuf with an empty
>        segment, this mbuf is dropped by the hypervisor.
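
To make the last point concrete, the Tx-side counterpart of this fix
amounts to unlinking zero-length segments from the mbuf chain before
the packet reaches the hypervisor. A simplified sketch of the idea
(not the actual patch code) follows:

  #include <rte_mbuf.h>

  /* Simplified illustration: drop zero-length segments from a chain
   * so the host never sees an empty Tx descriptor. Assumes the head
   * segment itself is non-empty; the driver must also handle that
   * case. */
  static void
  skip_empty_segs(struct rte_mbuf *head)
  {
          struct rte_mbuf *m = head;

          while (m->next != NULL) {
                  if (m->next->data_len == 0) {
                          struct rte_mbuf *empty = m->next;

                          m->next = empty->next;
                          empty->next = NULL;
                          rte_pktmbuf_free_seg(empty);
                          head->nb_segs--;
                  } else {
                          m = m->next;
                  }
          }
  }

An empty segment contributes nothing to pkt_len, so only nb_segs and
the chain links need adjusting.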
>      
>      Didier Pallard (8):
>        net: export IPv6 header extensions skip function
>        net/vmxnet3: return unknown IPv4 extension len ptype
>        net/vmxnet3: gather offload data on first and last segment
>        net/vmxnet3: fix Rx offload information in multiseg packets
>        net/vmxnet3: complete Rx offloads support
>        net/vmxnet3: guess mss if not provided in LRO mode
>        net/vmxnet3: ignore empty segments in reception
>        net/vmxnet3: skip empty segments in transmission
>      
>       drivers/net/vmxnet3/Makefile            |   1 +
>       drivers/net/vmxnet3/base/vmxnet3_defs.h |  27 ++++-
>       drivers/net/vmxnet3/vmxnet3_ethdev.c    |   2 +
>       drivers/net/vmxnet3/vmxnet3_ethdev.h    |   1 +
>       drivers/net/vmxnet3/vmxnet3_rxtx.c      | 200 ++++++++++++++++++++++++++------
>       lib/librte_net/Makefile                 |   1 +
>       lib/librte_net/rte_net.c                |  21 ++--
>       lib/librte_net/rte_net.h                |  27 +++++
>       lib/librte_net/rte_net_version.map      |   1 +
>       9 files changed, 238 insertions(+), 43 deletions(-)
>      
>      --
>      2.11.0
>      
> Didier, the changes look good overall.  Can you describe how you tested this patch set, as well as how you verified there is no regression in the non-LRO case?
>


Thread overview: 17+ messages
2018-03-28 15:43 Didier Pallard
2018-03-28 15:43 ` [dpdk-dev] [PATCH 1/8] net: export IPv6 header extensions skip function Didier Pallard
2018-04-17 19:28   ` Ferruh Yigit
2018-04-23  8:35   ` Olivier Matz
2018-03-28 15:43 ` [dpdk-dev] [PATCH 2/8] net/vmxnet3: return unknown IPv4 extension len ptype Didier Pallard
2018-04-16 19:46   ` Yong Wang
2018-04-17  9:09     ` Didier Pallard
2018-03-28 15:43 ` [dpdk-dev] [PATCH 3/8] net/vmxnet3: gather offload data on first and last segment Didier Pallard
2018-03-28 15:43 ` [dpdk-dev] [PATCH 4/8] net/vmxnet3: fix Rx offload information in multiseg packets Didier Pallard
2018-03-28 15:43 ` [dpdk-dev] [PATCH 5/8] net/vmxnet3: complete Rx offloads support Didier Pallard
2018-03-28 15:43 ` [dpdk-dev] [PATCH 6/8] net/vmxnet3: guess mss if not provided in LRO mode Didier Pallard
2018-03-28 15:43 ` [dpdk-dev] [PATCH 7/8] net/vmxnet3: ignore empty segments in reception Didier Pallard
2018-03-28 15:43 ` [dpdk-dev] [PATCH 8/8] net/vmxnet3: skip empty segments in transmission Didier Pallard
2018-04-13  4:44 ` [dpdk-dev] [PATCH 0/8] net/vmxnet3: fix offload issues Yong Wang
2018-04-13 14:33   ` Didier Pallard [this message]
2018-04-20 22:02 ` Yong Wang
2018-04-23 14:46   ` Ferruh Yigit
