DPDK patches and discussions
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: Vlad Zolotarov <vladz@cloudius-systems.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v7 0/3]: Add LRO support to ixgbe PMD
Date: Wed, 11 Mar 2015 14:13:32 +0000	[thread overview]
Message-ID: <2601191342CEEE43887BDE71AB977258213F57FF@irsmsx105.ger.corp.intel.com> (raw)
In-Reply-To: <1426015891-20450-1-git-send-email-vladz@cloudius-systems.com>



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Vlad Zolotarov
> Sent: Tuesday, March 10, 2015 7:31 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v7 0/3]: Add LRO support to ixgbe PMD
> 
> This series adds the missing flow for enabling LRO in the ethdev layer and
> adds support for this feature in the ixgbe PMD. Hopefully this initiative
> will be picked up by an Intel developer who can add LRO support to the
> other Intel PMDs.
> 
> The series starts with some cleanup work in the code that the final patch (the
> actual addition of LRO support) touches, uses, or changes. Quite a few issues
> remain in the ixgbe PMD code, but they will have to be addressed in a separate
> series; I've left a few "TODO" remarks in the code.
> 
> The LRO ("RSC" in Intel's terminology) PMD completion-handling code follows the
> same design as the corresponding Linux and FreeBSD implementations: pass the
> aggregation's cluster HEAD buffer along the NEXTP entries of the software ring
> until EOP is met.
> 
> HW configuration follows the corresponding specs: this feature is supported only by x540 and
> 82599 PF devices.
> 
> The feature has been tested with the seastar TCP stack, with the following configuration on the Tx side:
>    - MTU: 400B
>    - 100 concurrent TCP connections.
> 
> The results were:
>    - Without LRO: total throughput: 0.12Gbps, coefficient of variance: 1.41%
>    - With LRO:    total throughput: 8.21Gbps, coefficient of variance: 0.59%
> 
> This is roughly a 68x improvement (8.21 / 0.12 ≈ 68).
> 
> New in v7:
>    - Free not-yet-completed RSC aggregations in rte_eth_dev_stop() flow.
>    - Fixed rx_bulk_alloc_allowed and rx_vec_allowed initialization:
>       - Don't set them to FALSE in rte_eth_dev_stop() flow - the following
>         rte_eth_dev_start() will need them.
>       - Reset them to TRUE in rte_eth_dev_configure() and not in a probe() flow.
>         This will ensure the proper behaviour if port is re-configured.

Those changes should probably be part of the other patch you submitted:
[PATCH v2 3/3] ixgbe: Unify the rx_pkt_bulk callback initialization
?
Konstantin

>    - Reset the sw_ring[].mbuf entry in a bulk allocation case.
>      This is needed for ixgbe_rx_queue_release_mbufs().
>    - _recv_pkts_lro(): added the missing memory barrier before RDT update in a
>      non-bulk allocation case.
>    - Don't allow RSC when device is configured in an SR-IOV mode.
> 
> New in v6:
>    - The fix of a typo in the "bug fixes" series that broke compilation caused a
>      minor change in this follow-up series.
> 
> New in v5:
>    - Split the series into "bug fixes" and "all the rest" so that the former could be
>      integrated into a 2.0 release.
>    - Put the RTE_ETHDEV_HAS_LRO_SUPPORT definition at the beginning of rte_ethdev.h.
>    - Removed the "TODO: Remove me" comment near RTE_ETHDEV_HAS_LRO_SUPPORT.
> 
> New in v4:
>    - Remove CONFIG_RTE_ETHDEV_LRO_SUPPORT from config/common_linuxapp.
>    - Define RTE_ETHDEV_HAS_LRO_SUPPORT in rte_ethdev.h.
>    - As a result of "ixgbe: check rxd number to avoid mbuf leak" (352078e8e) Vector Rx
>      had to get the same treatment as Rx Bulk Alloc (see PATCH4 for more details).
> 
> New in v3:
>    - ixgbe_rx_alloc_bufs(): Always reset refcnt of the buffers to 1. Otherwise rte_pktmbuf_free()
>      won't free them.
> 
> New in v2:
>    - Removed rte_eth_dev_data.lro_bulk_alloc and added ixgbe_hw.rx_bulk_alloc_allowed
>      instead.
>    - Unified the rx_pkt_bulk callback setting (a separate new patch).
>    - Fixed a few styling and spelling issues.
> 
> Vlad Zolotarov (3):
>   ixgbe: Cleanups
>   ixgbe: Code refactoring
>   ixgbe: Add LRO support
> 
>  lib/librte_ether/rte_ethdev.h       |   9 +-
>  lib/librte_pmd_ixgbe/ixgbe_ethdev.c |  29 +-
>  lib/librte_pmd_ixgbe/ixgbe_ethdev.h |   5 +
>  lib/librte_pmd_ixgbe/ixgbe_rxtx.c   | 738 ++++++++++++++++++++++++++++++++----
>  lib/librte_pmd_ixgbe/ixgbe_rxtx.h   |   6 +
>  5 files changed, 710 insertions(+), 77 deletions(-)
> 
> --
> 2.1.0

Thread overview: 8+ messages
2015-03-10 19:31 Vlad Zolotarov
2015-03-10 19:31 ` [dpdk-dev] [PATCH v7 1/3] ixgbe: Cleanups Vlad Zolotarov
2015-03-10 19:31 ` [dpdk-dev] [PATCH v7 2/3] ixgbe: Code refactoring Vlad Zolotarov
2015-03-10 19:31 ` [dpdk-dev] [PATCH v7 3/3] ixgbe: Add LRO support Vlad Zolotarov
2015-03-11 15:02   ` Ananyev, Konstantin
2015-03-11 15:56     ` Vlad Zolotarov
2015-03-11 14:13 ` Ananyev, Konstantin [this message]
2015-03-11 15:56   ` [dpdk-dev] [PATCH v7 0/3]: Add LRO support to ixgbe PMD Vlad Zolotarov
