DPDK patches and discussions
From: Thomas Monjalon <thomas@monjalon.net>
To: Gal Cohen <galco@mellanox.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] DPDK 20.05 Mellanox Roadmap
Date: Mon, 30 Mar 2020 00:00:55 +0200
Message-ID: <1734829.R1toDxpfAE@xps>
In-Reply-To: <AM6PR05MB5078C11F13B3756FC5638A45C3F90@AM6PR05MB5078.eurprd05.prod.outlook.com>

Summarized for the web:
	http://git.dpdk.org/tools/dpdk-web/commit/?id=7edb98830


16/03/2020 09:21, Gal Cohen:
> Below is Mellanox's roadmap for DPDK 20.05:
> 
> 
> 
> Reduce memory consumption in mlx5 PMD -
> 
> [1] Reduce flow memory (entry size) footprint/consumption.
> 
> [2] Remove flow rules caching.
> 
> Change the mlx5 PMD implementation of rte_eth_dev_stop()/rte_eth_dev_start() to stop caching flow rules (freeing resources for the created flows).
> 
> Benefits: Scale and performance improvement.
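
(Illustration only, not part of the roadmap text.) A minimal sketch of what
this change implies on the application side, assuming rules created before
rte_eth_dev_stop() no longer survive a stop/start cycle and must be
re-inserted by the application after restart:

/* Sketch: with flow caching removed, re-create rules after a port restart
 * instead of relying on the PMD keeping them across stop/start. */
#include <rte_errno.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

static int
restart_port_and_reinstall_flow(uint16_t port_id,
                                const struct rte_flow_attr *attr,
                                const struct rte_flow_item pattern[],
                                const struct rte_flow_action actions[])
{
	struct rte_flow_error err;
	struct rte_flow *flow;
	int ret;

	rte_eth_dev_stop(port_id);  /* assumed to release flow resources */

	ret = rte_eth_dev_start(port_id);
	if (ret != 0)
		return ret;

	/* Re-create the rule under the new, non-caching behavior. */
	flow = rte_flow_create(port_id, attr, pattern, actions, &err);
	if (flow == NULL)
		return -rte_errno;

	return 0;
}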
> 
> 
> 
> HW offload for TTL matching
> 
> [3] Offload TTL matching from routing applications to the NIC.
> 
> Usage is through the rte_flow API; the implementation is in the mlx5 PMD.
> 
> Benefits: Simplifies the routing application and reduces host CPU usage by offloading the TTL logic to the NIC.
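
For illustration only (not from the roadmap), a minimal sketch of what TTL
matching through rte_flow could look like, assuming the IPv4 item's
time_to_live field becomes matchable in the mlx5 PMD; the queue index is a
placeholder for an exception path handled in software:

/* Sketch: match IPv4 packets with TTL == 1 and steer them to a queue. */
#include <rte_flow.h>

static struct rte_flow *
create_ttl_rule(uint16_t port_id, uint16_t queue_idx,
                struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ipv4 ipv4_spec = { .hdr.time_to_live = 1 };
	struct rte_flow_item_ipv4 ipv4_mask = { .hdr.time_to_live = 0xff };
	struct rte_flow_action_queue queue = { .index = queue_idx };

	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ipv4_spec, .mask = &ipv4_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}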
> 
> 
> 
> Flow aging - Introducing new AGE action in the rte_flow API
> 
> [4] Add a new action to allow the application (client) to define an age threshold (in seconds) after which it expects to get a notification from the mlx5 PMD.
> 
> The PMD will implement the flow aging monitoring through the rte_flow API, instead of leaving it to the application.
> 
> Additional background can be found here: https://patchwork.dpdk.org/patch/53701/.
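
As a rough sketch only: assuming the action lands roughly as proposed in the
patch above (RTE_FLOW_ACTION_TYPE_AGE carrying a timeout in seconds plus an
opaque application context), attaching it to a rule might look like this;
the struct layout is taken from the proposal, not a final API:

/* Sketch: the AGE action asks the PMD to report the flow once no packet
 * has hit it for 'timeout_sec' seconds; 'app_ctx' is returned to the
 * application when the flow ages out. */
#include <rte_flow.h>

static struct rte_flow *
create_aged_drop_rule(uint16_t port_id, uint32_t timeout_sec, void *app_ctx,
                      struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_action_age age = {
		.timeout = timeout_sec,  /* age threshold, in seconds */
		.context = app_ctx,      /* opaque application handle */
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_AGE, .conf = &age },
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}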
> 
> 
> 
> Add support for additional features in vDPA
> 
>             [5] Support Large-Send-Offload (LSO).
> 
>             [6] Improve debuggability through vDPA device counters and error reporting.
> 
>             [7] Rate limiting - allow the application to define the maximum number of packets, providing a means to apply different policies/QoS to different applications/tenants.
> 
>             [8] ConnectX-6 Dx - introduce HW Doorbell instead of the current SW Doorbell.
> 
> 
> 
> Performance improvement for jumbo-frame size
> 
>             [9] Add support for large MTU size while using MPRQ to provide full line rate performance with any packet size.
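
As an aside (not from the roadmap), a minimal sketch of the application side
of this, assuming MPRQ itself is enabled through the mlx5 devargs (e.g.
mprq_en=1 on the EAL device argument list) and the port is configured for a
jumbo maximum packet length:

/* Sketch: configure a port for jumbo frames; the MPRQ Rx path is selected
 * on the mlx5 side via devargs, which is an assumption here. */
#include <rte_ethdev.h>

static int
configure_jumbo_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			.max_rx_pkt_len = 9000,  /* example jumbo size */
			.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
		},
	};

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}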
> 
> 
> 
> Hairpin with jumbo-frames
> 
>  [10] Add support for jumbo-frame packets in addition to the existing hairpin offloading functionality.
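
Again for illustration only, a rough sketch of binding an Rx hairpin queue to
a Tx hairpin queue on the same port, assuming the hairpin setup API introduced
in 19.11 (rte_eth_rx/tx_hairpin_queue_setup); queue indexes are placeholders,
and the jumbo-frame support is what this roadmap item would add on top:

/* Sketch: set up a single Rx/Tx hairpin queue pair on one port, after the
 * regular queues have been configured. */
#include <rte_ethdev.h>

static int
setup_hairpin_pair(uint16_t port_id, uint16_t rxq_idx, uint16_t txq_idx,
                   uint16_t nb_desc)
{
	struct rte_eth_hairpin_conf conf = { .peer_count = 1 };
	int ret;

	conf.peers[0].port = port_id;   /* hairpin back to the same port */
	conf.peers[0].queue = txq_idx;
	ret = rte_eth_rx_hairpin_queue_setup(port_id, rxq_idx, nb_desc, &conf);
	if (ret != 0)
		return ret;

	conf.peers[0].queue = rxq_idx;
	return rte_eth_tx_hairpin_queue_setup(port_id, txq_idx, nb_desc, &conf);
}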
> 
> 
> 
> Enable GTP encap/decap
> 
> [11] Add encap/decap to GTP using the rte_flow raw_encap/raw_decap APIs.
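
For illustration (not from the roadmap), a rough sketch of the raw_encap side,
assuming the application pre-builds the outer ETH/IPv4/UDP/GTP-U headers as a
byte buffer; the hand-rolled 8-byte GTP-U header below is an assumption kept
only so the sketch stays self-contained:

#include <string.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_udp.h>
#include <rte_flow.h>

/* Minimal GTP-U header layout, assumed here for brevity. */
struct gtpu_hdr {
	uint8_t  flags;     /* version/PT flags, e.g. 0x30 for GTPv1 */
	uint8_t  msg_type;  /* 0xff = G-PDU */
	uint16_t length;    /* payload length, filled by the application */
	uint32_t teid;      /* tunnel endpoint id */
} __attribute__((packed));

/* Sketch: concatenate the outer headers into one buffer and hand it to the
 * RAW_ENCAP action; a matching decap rule would use RAW_DECAP similarly.
 * 'buf' must be large enough to hold all four headers. */
static void
build_gtpu_encap_action(struct rte_flow_action_raw_encap *encap,
                        uint8_t *buf,
                        const struct rte_ether_hdr *eth,
                        const struct rte_ipv4_hdr *ipv4,
                        const struct rte_udp_hdr *udp,
                        const struct gtpu_hdr *gtpu)
{
	size_t off = 0;

	memcpy(buf + off, eth, sizeof(*eth));   off += sizeof(*eth);
	memcpy(buf + off, ipv4, sizeof(*ipv4)); off += sizeof(*ipv4);
	memcpy(buf + off, udp, sizeof(*udp));   off += sizeof(*udp);
	memcpy(buf + off, gtpu, sizeof(*gtpu)); off += sizeof(*gtpu);

	encap->data = buf;
	encap->size = off;
	encap->preserve = NULL;
}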
> 
> 
> 
> Adding test application
> 
>            [12] Introduce a new (standalone) test application for measuring flow insertion rate and traffic performance.
> 
>            The application defines flows with the rte_flow API and measures the flow insertion rate and traffic performance.
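
To make the intent concrete, a minimal sketch (not the planned application
itself) of how insertion rate could be measured by timing a batch of
rte_flow_create() calls with the TSC:

/* Sketch: insert 'count' copies of a prepared rule and report rules/second.
 * The real test application would vary the patterns; here they are fixed. */
#include <stdio.h>
#include <rte_cycles.h>
#include <rte_flow.h>

static void
measure_insertion_rate(uint16_t port_id, uint32_t count,
                       const struct rte_flow_attr *attr,
                       const struct rte_flow_item pattern[],
                       const struct rte_flow_action actions[])
{
	struct rte_flow_error err;
	uint64_t start = rte_rdtsc();
	uint32_t inserted = 0;

	for (uint32_t i = 0; i < count; i++) {
		if (rte_flow_create(port_id, attr, pattern, actions, &err))
			inserted++;
	}

	double secs = (double)(rte_rdtsc() - start) / rte_get_tsc_hz();
	printf("inserted %u rules in %.3f s (%.0f rules/s)\n",
	       inserted, secs, inserted / secs);
}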
> 




