From: Vlad Zolotarov <vladz@cloudius-systems.com>
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v1 0/5]: Add LRO support to ixgbe PMD
Date: Tue, 3 Mar 2015 21:48:38 +0200 [thread overview]
Message-ID: <1425412123-5227-1-git-send-email-vladz@cloudius-systems.com> (raw)
This series adds the missing flow for enabling LRO in the ethdev layer and adds
support for this feature to the ixgbe PMD. There is a big hope that this initiative
will be picked up by an Intel developer who will add LRO support to the other
Intel PMDs. ;)
The series starts with some cleanup of the code that the final patch (the actual
addition of LRO support) touches, uses, or changes. Quite a few issues remain in the
ixgbe PMD code, but they will have to be the subject of a separate series; I've left
a few "TODO" remarks in the code.
The LRO ("RSC" in Intel's terminology) completion handling code in the PMD follows the
same design as the corresponding Linux and FreeBSD implementations: the HEAD buffer of
an aggregation cluster is passed along the NEXTP entries of the software ring until
EOP is met.
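To make the NEXTP walk concrete, here is a minimal, self-contained sketch of the idea. The structure and function names below are hypothetical simplifications (the real PMD works on struct rte_mbuf and the ixgbe software ring); only the walking scheme itself is taken from the description above.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the real ixgbe structures. */
struct mbuf {
	struct mbuf *next;   /* chain link inside an aggregated cluster */
	int is_eop;          /* set when HW flags End Of Packet */
};

struct sw_ring_entry {
	struct mbuf *mbuf;
	unsigned int nextp;  /* HW-provided index of the next descriptor
	                      * belonging to the same RSC aggregation */
};

/* Walk one RSC aggregation: start from the HEAD buffer and follow the
 * NEXTP indices through the software ring until EOP is met, linking the
 * buffers into a single cluster. Returns the number of segments. */
static unsigned int
rsc_collect(struct sw_ring_entry *ring, unsigned int head_idx)
{
	struct mbuf *tail = ring[head_idx].mbuf;
	unsigned int idx = head_idx;
	unsigned int nseg = 1;

	while (!tail->is_eop) {
		idx = ring[idx].nextp;       /* jump to the next segment */
		tail->next = ring[idx].mbuf; /* chain it onto the cluster */
		tail = tail->next;
		nseg++;
	}
	return nseg;
}
```

Note that, unlike plain scatter-gather Rx, the segments of one aggregation need not sit in consecutive ring slots, which is why the hardware reports an explicit NEXTP index.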
HW configuration follows the corresponding specs: this feature is supported only by x540 and
82599 PF devices.
The feature has been tested with the Seastar TCP stack, with the following configuration on the Tx side:
- MTU: 400B
- 100 concurrent TCP connections.
The results were:
- Without LRO: total throughput: 0.12 Gbps, coefficient of variation: 1.41%
- With LRO: total throughput: 8.21 Gbps, coefficient of variation: 0.59%
Vlad Zolotarov (5):
ixgbe: Cleanups
ixgbe: Bug fix: Properly configure Rx CRC stripping for x540 devices
ixgbe: Code refactoring
common_linuxapp: Added CONFIG_RTE_ETHDEV_LRO_SUPPORT option
ixgbe: Add LRO support
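As a rough illustration of the application-facing side of the enabling flow the series adds, the sketch below mocks the Rx-mode configuration with a stand-alone struct. The struct and field names here (rxmode_sketch, enable_lro) are assumptions for illustration only and are not taken from the patches; the real knob lives in lib/librte_ether/rte_ethdev.h.

```c
#include <assert.h>

/* Simplified stand-in for the ethdev Rx mode configuration; the
 * enable_lro bit is hypothetical and only illustrates the kind of
 * flag this series adds to the ethdev layer. */
struct rxmode_sketch {
	unsigned int hw_strip_crc : 1; /* CRC stripping (see patch 2) */
	unsigned int enable_lro   : 1; /* ask the PMD to enable LRO/RSC */
};

/* An application would fill this in before configuring the port. */
static struct rxmode_sketch
lro_rxmode(void)
{
	struct rxmode_sketch rm = { .hw_strip_crc = 1, .enable_lro = 1 };
	return rm;
}
```

Since only x540 and 82599 PF devices support RSC, a PMD seeing this bit on unsupported hardware would have to reject the configuration.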
config/common_linuxapp | 1 +
lib/librte_ether/rte_ethdev.h | 7 +-
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 17 +
lib/librte_pmd_ixgbe/ixgbe_ethdev.h | 5 +
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 724 ++++++++++++++++++++++++++++++++----
lib/librte_pmd_ixgbe/ixgbe_rxtx.h | 6 +
6 files changed, 687 insertions(+), 73 deletions(-)
--
2.1.0
Thread overview: 18+ messages
2015-03-03 19:48 Vlad Zolotarov [this message]
2015-03-03 19:48 ` [dpdk-dev] [PATCH v1 1/5] ixgbe: Cleanups Vlad Zolotarov
2015-03-03 19:48 ` [dpdk-dev] [PATCH v1 2/5] ixgbe: Bug fix: Properly configure Rx CRC stripping for x540 devices Vlad Zolotarov
2015-03-03 19:48 ` [dpdk-dev] [PATCH v1 3/5] ixgbe: Code refactoring Vlad Zolotarov
2015-03-03 19:48 ` [dpdk-dev] [PATCH v1 4/5] common_linuxapp: Added CONFIG_RTE_ETHDEV_LRO_SUPPORT option Vlad Zolotarov
2015-03-03 19:48 ` [dpdk-dev] [PATCH v1 5/5] ixgbe: Add LRO support Vlad Zolotarov
2015-03-04 0:33 ` Stephen Hemminger
2015-03-04 7:24 ` Vlad Zolotarov
2015-03-04 0:33 ` Stephen Hemminger
2015-03-04 7:57 ` Vlad Zolotarov
2015-03-04 18:54 ` Stephen Hemminger
2015-03-05 9:36 ` Vlad Zolotarov
2015-03-04 8:05 ` Avi Kivity
2015-03-04 0:34 ` Stephen Hemminger
2015-03-04 7:57 ` Vlad Zolotarov
2015-03-04 0:36 ` Stephen Hemminger
2015-03-04 7:59 ` Vlad Zolotarov
2015-03-04 18:51 ` Stephen Hemminger