From: Asaf Penso <asafp@mellanox.com>
To: "dev@dpdk.org" <dev@dpdk.org>
Subject: [dpdk-dev] DPDK20.02 Mellanox Roadmap
Date: Thu, 26 Dec 2019 17:02:56 +0000
Message-ID: <AM6PR05MB66152A6C556514DD2AB237ADC02B0@AM6PR05MB6615.eurprd05.prod.outlook.com>

This is Mellanox's roadmap for DPDK 20.02, which we are currently working on.



Enable zero-copy to/from GPUs, storage devices, etc.

* Enable a pinned external buffer with a pktmbuf pool:

   * Introduce a new flag in rte_pktmbuf_pool_private to indicate that the mempool is for mbufs with pinned external buffers.

   * This enables a GPU or a storage device to receive frames with zero copy (see the sketch below).
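
To illustrate the intended application-side usage, here is a minimal sketch. The flag and helper names (RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF, rte_pktmbuf_pool_create_extbuf) follow the current proposal and may still change before the release:

    #include <rte_mbuf.h>
    #include <rte_lcore.h>

    /* Sketch only: create a pktmbuf pool whose data buffers live in
     * externally registered (e.g. GPU) memory and stay pinned there,
     * so received frames land in device memory with no extra copy. */
    static struct rte_mempool *
    make_pinned_pool(const struct rte_pktmbuf_extmem *ext_mem,
                     unsigned int ext_num)
    {
        return rte_pktmbuf_pool_create_extbuf("gpu_pool",
                8192,            /* number of mbufs */
                256,             /* per-lcore cache size */
                0,               /* application private area size */
                2048,            /* data room size */
                rte_socket_id(), /* NUMA socket */
                ext_mem, ext_num);
    }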



Preserve DSCP field for IPv4/IPv6 decapsulation

* Introduce a new rte_flow API to set the DSCP field for IPv4 and IPv6 during decapsulation

   In an overlay network, the DSCP field may need to be updated during decapsulation to preserve the IP precedence (see the sketch below)
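
A possible shape for the new action, as currently proposed (the action and struct names below may change until the API is merged), shown here restoring the DSCP right after a VXLAN decap:

    #include <rte_flow.h>

    /* Sketch: after decapsulation, explicitly set the inner IPv4 DSCP
     * so the IP precedence of the outer header is preserved. */
    static const struct rte_flow_action_set_dscp set_dscp = {
        .dscp = 26, /* example value (AF31) */
    };
    static const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
        { .type = RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP, .conf = &set_dscp },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };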



Additions to the mlx5 PMD (ConnectX-5 SmartNIC, BlueField IPU and above):

* Support multiple header modifications in a single flow rule

   With this, a single flow rule can carry several IPv6 header modification actions (see the first sketch after this list)

* HW offload for finer-granularity RSS: hashing only on the source or only on the destination fields, for both L3 and L4

   For example, a gateway application where both sides of a flow are handled by the same core (see the second sketch after this list)

* HW offload for matching on a GTP-U header, specifically on the msg_type and teid fields

   With this, classification per 4G/5G bearer can be done (see the third sketch after this list)

* Support a PMD hint not to inline packet data

   This is needed to support a mixed traffic pattern, where some buffers are in local host memory and others in the memory of other devices (see the fourth sketch after this list).
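
First sketch, for the multiple-header-modifications item. The SET_IPV6_SRC/SET_IPV6_DST actions already exist in rte_flow; the new part is the PMD accepting several such modifications in one rule:

    #include <rte_flow.h>

    /* Sketch: one rule that rewrites both IPv6 addresses and
     * decrements the TTL, all in a single flow rule. */
    static const struct rte_flow_action_set_ipv6 new_src = {
        .ipv6_addr = { 0x20, 0x01, 0x0d, 0xb8, /* example address */ },
    };
    static const struct rte_flow_action_set_ipv6 new_dst = {
        .ipv6_addr = { 0x20, 0x01, 0x0d, 0xb8, /* example address */ },
    };
    static const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC, .conf = &new_src },
        { .type = RTE_FLOW_ACTION_TYPE_SET_IPV6_DST, .conf = &new_dst },
        { .type = RTE_FLOW_ACTION_TYPE_DEC_TTL },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };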
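
Second sketch, for the finer-granularity RSS item. ETH_RSS_L3_SRC_ONLY is the existing ethdev way to request this; the roadmap item is the mlx5 HW offload for it:

    #include <rte_flow.h>
    #include <rte_ethdev.h>

    static const uint16_t queues[] = { 0, 1, 2, 3 };

    /* Sketch: hash only on the source L3 address (IPv4/IPv6),
     * ignoring the destination. */
    static const struct rte_flow_action_rss rss = {
        .types = ETH_RSS_IP | ETH_RSS_L3_SRC_ONLY,
        .queue_num = 4,
        .queue = queues,
    };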
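
Third sketch, for the GTP-U matching item. rte_flow already defines RTE_FLOW_ITEM_TYPE_GTP with msg_type and teid fields; the roadmap item adds HW matching on them:

    #include <stdint.h>
    #include <rte_flow.h>
    #include <rte_byteorder.h>

    /* Sketch: match G-PDU packets (msg_type 0xff) of one bearer (TEID). */
    static const struct rte_flow_item_gtp gtp_spec = {
        .msg_type = 0xff,
        .teid = RTE_BE32(1234),
    };
    static const struct rte_flow_item_gtp gtp_mask = {
        .msg_type = 0xff,
        .teid = RTE_BE32(UINT32_MAX),
    };
    static const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP },
        { .type = RTE_FLOW_ITEM_TYPE_GTP,
          .spec = &gtp_spec, .mask = &gtp_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };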
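
Fourth sketch, for the no-inline hint item. The exact mechanism is not settled; one plausible shape is a per-mbuf dynamic flag (the flag name below is purely a placeholder, not a final API):

    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    static uint64_t no_inline_flag; /* stays 0 if the hint is unavailable */

    /* Sketch: look up a (hypothetical) dynamic flag once at init... */
    static void
    init_no_inline_hint(void)
    {
        int bit = rte_mbuf_dynflag_lookup("pmd_no_inline_hint", NULL);
        if (bit >= 0)
            no_inline_flag = 1ULL << bit;
    }

    /* ...and set it on mbufs whose data must not be inlined into the
     * TX descriptor, e.g. buffers living in another device's memory. */
    static void
    mark_no_inline(struct rte_mbuf *m)
    {
        m->ol_flags |= no_inline_flag;
    }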



Reduce memory consumption in mlx5 PMD

* Change the implementation of rte_eth_dev_stop()/rte_eth_dev_start(), which currently caches flow rules, to a non-cached implementation that frees all software and hardware resources of the created flows on stop (see the sketch below).
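
A practical consequence, sketched below: under the non-cached scheme, flow rules created before rte_eth_dev_stop() are gone after the stop, so the application re-creates the ones it still needs (error handling abbreviated):

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    /* Sketch: application restart path under the non-cached scheme.
     * attr/pattern/actions are the application's own rule definition. */
    static struct rte_flow *
    restart_port(uint16_t port_id, const struct rte_flow_attr *attr,
                 const struct rte_flow_item pattern[],
                 const struct rte_flow_action actions[])
    {
        struct rte_flow_error error;

        rte_eth_dev_stop(port_id);  /* frees all SW and HW flow resources */
        if (rte_eth_dev_start(port_id) != 0)
            return NULL;
        /* Flow rules do not survive the stop; re-create them. */
        return rte_flow_create(port_id, attr, pattern, actions, &error);
    }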



Support the full ConnectX-5 feature set, including full HW offload functionality and performance, on ConnectX-6 Dx



Behavior change on rte_flow encap/decap actions

* The ECN field will always be copied from the inner header to the outer header on encap, and vice versa on decap

   This makes it easy to support congestion control algorithms that validate the ECN bits.

   One example is RoCE congestion control.



Introduce a new mlx5 PMD for vDPA (ConnectX-6 Dx, BlueField IPU and above):

* Add a new mlx5 PMD, mlx5_vdpa, to support vHost Data Path Acceleration (vDPA)

* mlx5_vdpa can run on top of PCI devices: VFs or a PF

* Based on the PCI device devargs specified by the user, the driver's probe function will choose either mlx5 or mlx5_vdpa (see the example below)
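
For example (the devarg key and value below are an assumption at this stage; the discussion is leaning toward a "class" devarg), the user would select the vDPA driver for one VF on the EAL command line like this:

    -w 0000:03:00.2,class=vdpa    (probe mlx5_vdpa on this device)
    -w 0000:03:00.3               (default: probe the mlx5 net PMD)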


